By defining the current limits of Artificial Intelligence (AI), and thereby its frontiers, many boundaries are shaping, and will continue to shape, the field's future. We must push on these boundaries to make further progress into what were yesterday's frontiers. These boundaries are both pliable and resilient, continually redrawing the limits of what AI can (or should) achieve.
Among these are technical boundaries (such as processing capacity), psychological boundaries (such as human trust in AI systems), ethical boundaries (such as those surrounding AI weapons), and conceptual boundaries (such as the kinds of AI people can imagine). It is within these boundaries that we find the construct of needs, and the limitations that our current concept of need places on the future of AI.
Dr. Ryan Watkins and his colleague Soheil Human (University of Vienna) recently published "Needs-aware artificial intelligence: AI that ‘serves [human] needs’" in the journal AI and Ethics. The article introduces the important role that the construct of need can, and will, play in the future of Artificial Intelligence.
Ryan Watkins is the author of eleven books and more than 95 articles. His publications are frequently cited in the performance improvement literature, making him the fourth most cited author of journal articles in the field.
In 2005, Ryan was a visiting scientist with the National Science Foundation, and he routinely works on projects with the World Bank, applying needs assessment, instructional design, and performance improvement to international assistance programs (including work in China, Laos, Kenya, and Tunisia).
If you are looking for context on this issue or would like to speak with Dr. Watkins, please contact GW Media Relations at [email protected] or 202-994-6460.