Does AI like Your Resume?: Everyone is Scrambling to Use AI Until...
With speaker Justin Daniels, Baker Donelson, Georgia, USA
Join the next AI Group call on 5th December at 2.30 pm UK GMT to hear speaker Justin Daniels of Baker Donelson in Georgia cover the topic…
Does AI like Your Resume?: Everyone is Scrambling to Use AI Until…
How would you feel about having AI review your resume for your next job? That is now a reality at larger companies! This session will evaluate this AI use case in terms of how a business should think about what transparent and responsible AI looks like: from a risk management perspective, how should you think about AI issues around privacy, cybersecurity and the relevant regulations? The National Institute of Standards and Technology (NIST) released its AI Risk Management Framework in January 2023 to help companies take a holistic approach to AI risk management. Justin Daniels will address all of these issues as he guides you through this AI use case and explains what the NIST AI framework is and how it might apply to it.
In the second half of the call, we’ll open the floor to questions and discussion of the issues raised.
Register in advance for this meeting:
After registering, you will receive a confirmation email containing information about joining the meeting.
Our MI AI Insights group calls are focused on discussing and refining the AI-related issues necessary to provide client-focused answers in this industry. Over the long term, this means digging into, regularly revisiting, and learning about these issues:
The Business Case for AI products
Professional Issues around AI use in Law
AI-Specific Regulation (and Market Fragmentation): e.g., the EU AI Act
AI Data Governance: AI dataset security, data privacy, right-to-know regulations, etc.
AI Evaluative Performance Metrics: test benchmarks and performance, hallucinations and other quality-of-service issues, etc.
Product Risks and Special Cases of Commercial Terms: usage/licensing, termination, IPR (dataset sourcing, copyright liabilities), etc.
AI Decision Accountability: risks of harmful content, risks related to model biases (individual or societal), model transparency and justification, and explainability and interpretability
AI Product Copyrights: e.g., the ruling in Thaler v. Perlmutter (U.S. District Court for the District of Columbia)