Google’s parent company, Alphabet, lost $100 billion in market value Wednesday after its new chatbot gave an inaccurate answer in a promotional video. Google announced the chatbot, Bard, on Monday as it races to compete with ChatGPT, the AI chatbot introduced in November by Microsoft-backed OpenAI, which responds to prompts with fluent, well-written text.
If you would like more context on this story, consider speaking with Patrick Hall, visiting assistant professor of decision sciences at the George Washington University School of Business. His teaching and research focus on data mining, machine learning, and the responsible use of these technologies.
“There's a lot of off-label use of these new systems for decision-making, which is both understandable and likely a big mistake. For decades, the primary focus of machine learning has been decision-making (e.g., lend: yes or no, and how much? hire: yes or no?). These are called estimation problems, and they result in an outcome that is a decision or that supports a conclusion. In many people's minds, this is what AI/ML is,” Hall says.
“But generative models are different. They are not designed to make decisions but to generate content and to help with writing or graphic design. You can ask ChatGPT what to do in a particular scenario, and it will generate a nice-sounding answer. But you shouldn't listen to it. It isn't weighing factors based on past known outcomes to arrive at a decision the way a traditional model does. All it has done is generate the most likely content given the prompt you provided. AI companies need to do a better job of explaining this to consumers.”
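To make Hall's distinction concrete, below is a minimal, hypothetical sketch in Python. It assumes the scikit-learn and Hugging Face transformers libraries; the lending data is invented purely for illustration, and neither snippet reflects how Bard or ChatGPT is actually built.

import numpy as np
from sklearn.linear_model import LogisticRegression
from transformers import pipeline  # assumes the Hugging Face transformers package is installed

# 1) Estimation / decision support: a classifier fit to past, known outcomes.
#    Hypothetical features: [income in $1000s, existing debt in $1000s]; label: 1 = loan repaid.
X_past = np.array([[50, 5], [30, 20], [80, 10], [25, 25]])
y_past = np.array([1, 0, 1, 0])
clf = LogisticRegression().fit(X_past, y_past)
applicant = np.array([[60, 8]])
print("Lend?", bool(clf.predict(applicant)[0]))  # a yes/no decision grounded in past outcomes

# 2) Generation: a language model returns the most likely continuation of a prompt.
#    The output reads fluently, but the model is not weighing outcomes or making a decision.
generator = pipeline("text-generation", model="gpt2")
print(generator("Should the bank approve this loan?", max_new_tokens=20)[0]["generated_text"])

The first model produces a decision because it was trained on labeled past outcomes; the second simply continues the text of the prompt, which is the difference Hall is describing.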
Hall can also discuss other concerns surrounding AI research and commercial deployment, including bias, privacy, supply-chain issues, and intellectual property, as well as the lack of scientific rigor and of genuine understanding that these generative AI systems demonstrate.
If you would like to speak with Professor Hall, please contact GW Media Relations Specialist Cate Douglass at [email protected].
-GW-