Question: Which countries are ahead of the US with AI adoption?
Bassel Haidar's Answer: Only China is ahead of the US in AI adoption, largely because it is not as constrained by Responsible AI, Ethical AI, privacy, and the other best practices the US holds itself to. But there are other reasons, too. For example, iFlytek, the world's top voice recognition company, has a user base double that of Apple's Siri, and Chinese firms control roughly a third of the world's security cameras. In both cases, the result is more data available to train voice recognition and computer vision models.
Question: Should the UK gov look to the US as an example for AI adoption?
Bassel Haidar's Answer: The UK and other European countries have stricter data privacy protection laws than the US, such as GDPR. They levy heavy fines, which can cause smaller companies to go out of business. It is hard to say whether the UK should follow the US as an example, since its laws and regulations are different from ours and not easily comparable.
Question: Why is AI adoption so costly?
Bassel Haidar's Answer: AI requires mature processes (i.e., Data and AI strategy/governance), a trained and skilled workforce, and an array of technologies and tools. The data component includes ingestion, transformation, and loading, not to mention data provenance/lineage, metadata, cataloging, etc. Organizations have storage and compute needs (on-prem, cloud, or hybrid) and, most likely, distributed systems and containers. We need data domain SMEs, data engineers, data scientists, cloud engineers, MLOps/DevOps engineers, software engineers, data analysts, etc. The technology tool stack is changing rapidly, and it is hard to keep up. Andrew Ng argues that even after you deploy your model to production you are only about halfway there, and he advocates a data-centric approach to ML (improving the data, rather than the ML algorithms, to get better results).
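As a rough illustration of the data component described above, here is a minimal, hypothetical Python sketch of an ingest-transform-load step that also records basic provenance metadata. The file names, columns, and catalog format are illustrative assumptions, not any organization's actual pipeline.

```python
# Minimal, hypothetical ETL sketch with basic provenance/lineage tracking.
# File names, columns, and the catalog format are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

import pandas as pd


def ingest(path: str) -> pd.DataFrame:
    """Ingest raw records from a CSV file."""
    return pd.read_csv(path)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Example transformation: drop duplicates and standardize a label column."""
    df = df.drop_duplicates()
    df["label"] = df["label"].str.strip().str.lower()
    return df


def load(df: pd.DataFrame, out_path: str, source_path: str, catalog_path: str) -> None:
    """Write the curated dataset and append a provenance record to a catalog."""
    df.to_csv(out_path, index=False)
    record = {
        "source": source_path,
        "output": out_path,
        "rows": len(df),
        "sha256": hashlib.sha256(df.to_csv(index=False).encode()).hexdigest(),
        "processed_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(catalog_path, "a") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    raw = ingest("raw_survey.csv")  # hypothetical input file
    curated = transform(raw)
    load(curated, "curated_survey.csv", "raw_survey.csv", "catalog.jsonl")
```

Even this toy version touches storage, data quality, and metadata concerns at every step, which is part of why the roles and tools listed above are all needed.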
Question: You're seeing computer power as a problem? Really?
Bassel Haidar's Answer: I do, with respect to how much it costs to train large models, their energy consumption, and their CO2 emissions. Here is a study cited in MIT Technology Review that shines a light on these adverse effects of computing power: https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/. Training a transformer with 213M parameters consumes 656,347 kWh, emits 626,155 pounds of CO2e, and can cost $1M-$3M in cloud compute, and that is only if you train it once. Now imagine OpenAI's GPT-3 with 175bn parameters or Google's PaLM with 540bn parameters; you can extrapolate the staggering cost, energy consumption, and CO2e emissions of training models at that scale.
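As a back-of-envelope check of how those two figures relate, here is a short calculation; the carbon-intensity factor is an assumed US-grid average in lbs CO2e per kWh, not a number quoted from the article.

```python
# Back-of-envelope check of the figures above.
# The carbon-intensity factor (lbs CO2e per kWh) is an assumed US-average value,
# not a number taken from the cited study.
ENERGY_KWH = 656_347       # reported energy to train the 213M-parameter transformer
LBS_CO2E_PER_KWH = 0.954   # assumed average carbon intensity of grid electricity

co2e_lbs = ENERGY_KWH * LBS_CO2E_PER_KWH
print(f"Estimated emissions: {co2e_lbs:,.0f} lbs CO2e")  # ~626,155 lbs, matching the figure above
```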
Question: How is computing power limited given what you said about large language models being easily extensible?
Bassel Haidar's Answer: If you are using transfer learning with Large Language Models (LLMs), then there are no significant adverse effects from a compute power perspective. I was referring to the cost incurred in training LLMs from scratch, or in building new AI solutions such as tracing cosmic bodies, whose models will also be very costly to train. Please see my response above for actual model training costs, energy consumption, and CO2e emissions.
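To illustrate why transfer learning sidesteps most of that training cost, here is a minimal sketch that freezes a pre-trained backbone and trains only a small classification head. It assumes the Hugging Face transformers library is installed, and the distilbert-base-uncased checkpoint is used purely as an example.

```python
# Minimal transfer-learning sketch: reuse a pre-trained model and update only a small head.
# Assumes the Hugging Face `transformers` library; the checkpoint name is an example.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Freeze the pre-trained backbone so only the new classification head is trained.
for param in model.base_model.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Training {trainable:,} of {total:,} parameters "
      f"({100 * trainable / total:.2f}%)")  # a tiny fraction of full pre-training
```

The point is simply that fine-tuning touches a small fraction of the parameters, so the compute, energy, and emissions footprint is a tiny slice of what pre-training the base model required.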
Question: Is anyone in the government using synthetic data?
Bassel Haidar's Answer: Yes, some agencies have built their own synthetic data generators, requiring contractors to use them for training their models.
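For context, a synthetic data generator can be as simple as the hypothetical sketch below, which samples tabular records from assumed marginal distributions. Real generators would also need to preserve correlations and privacy guarantees, and nothing here is modeled on any agency's actual tool.

```python
# Minimal, hypothetical synthetic tabular-data generator.
# Each column is sampled independently from an assumed distribution; production
# generators would also preserve cross-column correlations and privacy constraints.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
n_records = 1_000

synthetic = pd.DataFrame({
    "age": rng.integers(18, 90, size=n_records),                      # uniform placeholder
    "annual_income": rng.lognormal(mean=10.8, sigma=0.5, size=n_records).round(2),
    "region": rng.choice(["northeast", "south", "midwest", "west"],
                         size=n_records, p=[0.17, 0.38, 0.21, 0.24]),
})

synthetic.to_csv("synthetic_records.csv", index=False)  # hypothetical output file
print(synthetic.head())
```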
Question: Does Guidehouse sell or deliver training on AI to the government?
Bassel Haidar's Answer: We deliver innovation workshops that include AI and other technologies (e.g., Digital Twins).