For more information about papers referenced in Lord Clement-Jones' talk, click on the links below:
Handouts for the Event
OECD Survey
As mentioned, Lord Tim and Cognilytica are both members of the OECD. The OECD AI Working Group on Implementing Trustworthy AI has launched a short survey to identify and share practical approaches and good practices for implementing trustworthy AI systems; that is, AI systems that embody principles such as human rights, fairness, transparency, explainability, security, safety and accountability.
We invite you to share one or more use cases with us by answering a few questions by 31 July 2020 at www.oecd.ai/survey
Bonus Q&A
Q: "What would you think of a 100% ethical and therefore self-regulated AIG technology? What is the way for us to present this technology?"
"If by AIG you mean artificial general intelligence there is no such thing currently. There may be in the future of course. The levels of autonomy of “narrow” AI however are quite high however and and that is why we need a framework for assessing risk and whether ethical codes, corporate standards or regulation are appropriate, and that will also be relevant for future more general AI."
Q: "Do you consider AI in it's worse form a threat because it's a black box that can be manipulated by it's creators and funders? Or is AI inherently a threat due to it's high impact capacity? Essentially I'm asking is AI a threat because of the possible malicious nature of some individuals or is it an innate threat from the technology itself?"
"I don’t think any technology is inherently bad. it’s all about the humans who created it and deploy it and maybe replicate human biases within it and fail ton design it so it is explainable."
Q: "Any ideas on what kind of body will enforce AI related breach of standards. We have the FBI, CIA, police, etc for different threats. Do you imagine a AI Jedi sort of organization that is dedicated to this issue which as stated earlier, is hard enough to understand let alone ensure it is done right?"
"We already have quite a few regulators in European countries regulating AI in one form or another: in Finance, Competition, Telecoms/Internet, Data Protection. All informed by sets of ethical guidelines. I think the appropriate regulator will depend on the sector. If there is a reasonable sized Jedi sector then I am sure there is a business case to be made for a new regulator!!What is the UK doing to educate politicians, those in government, and the country with regards to AI?Good question. Ongoing! It’s painfully slow. Some politicians think this purely about industrial and military competition and forget the need to carry the public with us. That’s also where the ethics aspect comes in so that we don’t create an an adverse reaction among the public to a new technology (as happened with GMO Foods) Our media tends too talk about Terminator robots when they can which doesn’t help! "