
A Brief History of Artificial Intelligence
In 1950, a man named Alan Turing wrote a paper suggesting how to test a “thinking” machine. He believed if a machine could carry on a conversation by way of a teleprinter, imitating a human with no noticeable differences, the machine could be described as thinking. His paper was followed in 1952 by the Hodgkin-Huxley model of the brain as neurons forming an electrical network, with individual neurons firing in all-or-nothing (on/off) pulses. These combined events, discussed at a conference sponsored by Dartmouth College in 1956, helped to spark the concept of artificial intelligence.
A PC Magazine survey showed that Google Assistant, Alexa, and Siri are the most popular virtual assistants. Have we achieved true artificial intelligence? Sofia Altuna, of Google Assistant, said during an interview:
“Google Assistant brings together all of the technology and smarts we’ve been building for years, from the knowledge graph to natural language processing. Users can have a natural conversation with Google to help them in their user journeys.”
The development of AI has been far from streamlined and efficient. After starting as an exciting, imaginative concept in 1956, artificial intelligence research lost its funding in the 1970s, after several reports criticized a lack of progress. Efforts to imitate the human brain, called “neural networks,” were experimented with and then dropped.
Their most advanced programs were only able to handle simplistic problems, and were described as toys by the unimpressed. AI researchers had been overly optimistic in establishing their goals (a recurring theme), and had made naive assumptions about the difficulties they would encounter. After the results they promised never materialized, it should come as no surprise their funding was cut.
The First AI Winter
The stretch of time between 1974 and 1980 has become known as ‘The First AI Winter.’ AI researchers faced two very basic limitations: not enough memory, and processing speeds that would seem abysmal by today’s standards. Much like gravity research at the time, artificial intelligence research had its government funding cut, and interest dropped off. However, unlike gravity, AI research resumed in the 1980s, with the U.S. and Britain providing funding to compete with Japan’s new “fifth generation” computer project and its goal of becoming the world leader in computer technology.
The First AI Winter ended with the promising introduction of “Expert Systems,” which were developed and quickly adopted by large competitive corporations all around the world. The primary focus of AI research was now on the theme of accumulating knowledge from various experts, and sharing that knowledge with its users. AI also benefited from the revival of Connectionism in the 1980s.
Expert Systems
Expert Systems were an approach in artificial intelligence research that became popular throughout the 1970s. An Expert System uses the knowledge of experts to create a program. The process involves a user asking the Expert System a question, and receiving an answer, which may or may not be useful. The system answers questions and solves problems within a clearly defined arena of knowledge, and uses “rules” of logic.
The software has a relatively simple structure and is reasonably easy to design, build, and modify. Bank loan screening programs provide a good example of an Expert System from the early 1980s, but there were also medical and sales applications using Expert Systems. Generally speaking, these simple programs became quite useful and started saving businesses large amounts of money. (Expert systems are still available, but much less popular.)
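To make the rule-based approach concrete, here is a minimal sketch of a loan-screening “expert system” in Python. The rules, thresholds, and applicant fields are illustrative assumptions invented for this article, not a reconstruction of any actual 1980s product.

```python
# Minimal illustrative sketch of a rule-based "expert system" for loan screening.
# The rules, thresholds, and applicant fields below are hypothetical examples only.

RULES = [
    # (condition, conclusion) pairs encode the "expert" knowledge
    (lambda a: a["credit_score"] < 580,            "reject: credit score too low"),
    (lambda a: a["debt_to_income"] > 0.45,         "reject: debt-to-income ratio too high"),
    (lambda a: a["years_employed"] < 1,            "refer: insufficient employment history"),
    (lambda a: a["loan_amount"] > 5 * a["income"], "refer: loan large relative to income"),
]

def screen_applicant(applicant: dict) -> str:
    """Apply each rule in order; the first rule that fires determines the answer."""
    for condition, conclusion in RULES:
        if condition(applicant):
            return conclusion
    return "approve: no rule objected"

if __name__ == "__main__":
    applicant = {"credit_score": 640, "debt_to_income": 0.30,
                 "years_employed": 4, "income": 50_000, "loan_amount": 300_000}
    print(screen_applicant(applicant))  # -> refer: loan large relative to income
```

The appeal, and the limitation, is visible at a glance: the program is only as good as the hand-written rules, and it cannot learn new ones on its own.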
The Second AI Winter
The AI field experienced another major winter from 1987 to 1993. This second slowdown in AI research coincided with XCON, and other early Expert System computers, being seen as slow and clumsy. Desktop computers were becoming very popular and displacing the older, bulkier, much less user-friendly computer banks.
Eventually, Expert Systems simply became too expensive to maintain, when compared to desktop computers. Expert Systems were difficult to update, and could not “learn.” These were problems desktop computers did not have. At about the same time, DARPA (Defense Advanced Research Projects Agency) concluded AI “would not be” the next wave and redirected its funds to projects more likely to provide quick results. As a consequence, in the late 1980s, funding for AI research was cut deeply, creating the Second AI Winter .
Conversation with Computers Becomes a Reality
Natural language processing (NLP) is a subdivision of artificial intelligence which makes human language understandable to computers and machines. Natural language processing was sparked initially by efforts to use computers as translators for the Russian and English languages, in the early 1960s. These efforts led to thoughts of computers that could understand a human language. Efforts to turn those thoughts into a reality were generally unsuccessful, and by 1966, “many” had given up on the idea completely.
During the late 1980s, natural language processing experienced a leap in evolution, as a result of both a steady increase in computational power and the use of new machine learning algorithms. These new algorithms focused primarily on statistical models, as opposed to models like decision trees. During the 1990s, the use of statistical models for NLP rose dramatically.
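To make the contrast concrete, the sketch below shows the statistical idea in miniature: a bigram model that estimates word-transition probabilities from a toy corpus and scores sentences by how likely their word sequences are. The corpus, add-one smoothing, and scoring function are illustrative assumptions, not a reconstruction of any 1990s system.

```python
# Minimal sketch of a statistical NLP model: bigram probabilities estimated from a
# toy corpus, with add-one smoothing. Corpus and scoring are illustrative only.
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(w1: str, w2: str, alpha: float = 1.0) -> float:
    """P(w2 | w1) with add-alpha smoothing over the toy vocabulary."""
    vocab_size = len(unigrams)
    return (bigrams[(w1, w2)] + alpha) / (unigrams[w1] + alpha * vocab_size)

def sentence_score(sentence: str) -> float:
    """Multiply the transition probabilities of consecutive word pairs."""
    words = sentence.split()
    score = 1.0
    for w1, w2 in zip(words, words[1:]):
        score *= bigram_prob(w1, w2)
    return score

print(sentence_score("the cat sat on the mat"))  # word order seen in the corpus: higher score
print(sentence_score("mat the on sat cat the"))  # scrambled order: much lower score
```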
Intelligent Agents
In the early 1990s, artificial intelligence research shifted its focus to something called intelligent agents. These intelligent agents can be used for news retrieval services, online shopping, and browsing the web. Intelligent agents are also sometimes called agents or bots. With the use of Big Data programs, they have gradually evolved into digital virtual assistants, and chatbots.
Machine Learning
Machine learning is a subdivision of artificial intelligence and is used to develop NLP. Although it has become its own separate industry, performing tasks such as answering phone calls and providing a limited range of appropriate responses, it is still used as a building block for AI. Machine learning and deep learning have become important aspects of artificial intelligence.
- Boosting: Robert Schapire introduced the concept of boosting in his 1990 paper, The Strength of Weak Learnability. Schapire wrote, “A set of weak learners can create a single strong learner.” Most boosting algorithms repeatedly train weak classifiers and combine them into a single strong classifier (a minimal sketch appears after this list).
- Speech Recognition: Most speech recognition training being done today is the result of a deep learning technique referred to as long short-term memory (LSTM). This is based on a neural network model developed in 1997 by Sepp Hochreiter and Jürgen Schmidhuber. The LSTM technique supports learning tasks that require memory of events thousands of steps earlier, which is important for learning speech. Around 2007, LSTM began surpassing more established speech recognition programs. During 2015, Google’s speech recognition program reported a 49 percent increase in performance by using an LSTM that was CTC-trained.
- Facial Recognition: In 2006, the National Institute of Standards and Technology sponsored the “Face Recognition Grand Challenge,” and tested popular facial recognition algorithms. Various iris images, 3D face scans, and high-resolution facial images were examined. They found some of the new algorithms to be ten times as accurate as the facial recognition algorithms popular in 2002. Some of the new algorithms could surpass humans in recognizing faces (these algorithms could even identify identical twins). In 2012, an ML algorithm developed by Google’s X Lab could sort through and find videos which contained cats. In 2014, the DeepFace algorithm was developed by Facebook — it recognized people in photographs with the same accuracy as humans.
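To illustrate Schapire’s point that weak learners can be combined into a strong one, here is a minimal boosting sketch. It assumes a recent scikit-learn release (one where AdaBoostClassifier accepts an `estimator` keyword); the synthetic dataset and hyperparameters are arbitrary choices for demonstration, not anything from the history above.

```python
# Sketch of boosting: many weak learners (decision stumps) combined into a stronger
# classifier. Assumes scikit-learn >= 1.2 (the `estimator` keyword); dataset and
# hyperparameters are arbitrary choices for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stump = DecisionTreeClassifier(max_depth=1)  # a single "weak learner"
stump.fit(X_train, y_train)

boosted = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                             n_estimators=200, random_state=0)
boosted.fit(X_train, y_train)

print("single stump accuracy   :", stump.score(X_test, y_test))
print("boosted ensemble accuracy:", boosted.score(X_test, y_test))
```

On a dataset like this, the ensemble of stumps typically scores well above any individual stump, which is exactly the “weak learners into a strong learner” effect Schapire described.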
Digital Virtual Assistants and Chatbots
Digital virtual assistants understand spoken commands, and respond by completing tasks.
In 2011, Apple’s Siri gained a reputation as one of the most popular and successful digital virtual assistants supporting natural language processing. Online assistants such as Alexa, Siri, and Google Assistant may have started as convenient sources of information about the weather, the latest news, and traffic reports, but advances in NLP and access to massive amounts of data have transformed digital virtual assistants into a useful customer service tool. They are now capable of doing many of the same tasks a human assistant can. They can even tell jokes.
Digital virtual assistants can now manage schedules, make phone calls, take dictation, and read emails aloud. There are many virtual digital assistants on the market today, with Apple’s Siri, Amazon’s Alexa, Google Assistant, and Microsoft’s Cortana as well-known examples. Because these AI assistants respond to verbal commands, they can be used hands-free, allowing a person to drink their coffee, or change diapers, while the assistant accomplishes the assigned task.
These virtual assistants represent the future of AI research. They are driving cars, taking the form of robots to provide physical help, and performing research to help with making business decisions. Artificial intelligence is still evolving and finding new uses.
Chatbots and digital virtual assistants are quite similar. Chatbots (sometimes called “conversational agents”) can talk to real people, and are often used for marketing, sales, and customer service. They are typically designed to have human-like conversations with customers, but have also been used for a variety of other purposes. Chatbots are often used by businesses to communicate with customers (or potential customers) and to offer assistance around the clock. They normally have a limited range of topics, focused on a business’ services or products.
Chatbots have enough intelligence to sense context within a conversation and provide the appropriate response. Chatbots, however, cannot seek out answers to queries outside of their topic range or perform tasks on their own. (Virtual assistants can crawl through the available resources and help with a broad range of requests.)
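The difference is easier to see with a toy example. The sketch below shows the kind of narrow intent matching a simple chatbot relies on; the intents, patterns, and canned replies are invented for illustration and do not reflect any particular vendor’s design.

```python
# Tiny illustrative intent-matching chatbot restricted to a few made-up topics.
# Intents, patterns, and replies are invented examples, not a production design.
import re

INTENTS = {
    "hours":    (re.compile(r"\b(open|hours|close)\b", re.I),
                 "We are open 9am-5pm, Monday to Friday."),
    "shipping": (re.compile(r"\b(ship|shipping|deliver)\b", re.I),
                 "Standard shipping takes 3-5 business days."),
    "returns":  (re.compile(r"\b(return|refund)\b", re.I),
                 "You can return any item within 30 days."),
}

def reply(message: str) -> str:
    """Return the canned answer for the first matching intent, else a fallback."""
    for pattern, answer in INTENTS.values():
        if pattern.search(message):
            return answer
    return "Sorry, I can only help with store hours, shipping, and returns."

print(reply("When do you close today?"))     # matches the "hours" intent
print(reply("Can I get a refund?"))          # matches the "returns" intent
print(reply("What's the meaning of life?"))  # outside the topic range -> fallback
```

Anything outside the programmed topic range falls through to the fallback reply, which is why chatbots stay confined to a business’ services or products while full virtual assistants can range more widely.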
Passing Alan Turing’s Test
In my humble opinion, digital virtual assistants and chatbots have passed Alan Turing’s test, and achieved true artificial intelligence. Current artificial intelligence, with its ability to make decisions, can be described as capable of thinking. If these entities were communicating with a user by way of a teletype, a person might very well assume there was a human at the other end. That these entities can communicate verbally, and recognize faces and other images, far surpasses Turing’s expectations.


Tzu Chi Medical Journal, vol. 32(4), Oct-Dec 2020

The impact of artificial intelligence on human society and bioethics
Michael Cheng-Tek Tai
Department of Medical Sociology and Social Work, College of Medicine, Chung Shan Medical University, Taichung, Taiwan
Artificial intelligence (AI), known by some as Industrial Revolution (IR) 4.0, is going to change not only the way we do things and how we relate to others, but also what we know about ourselves. This article will first examine what AI is, discuss its impact on industrial, social, and economic changes to humankind in the 21st century, and then propose a set of principles for AI bioethics. IR 1.0, the industrial revolution of the 18th century, impelled a huge social change without directly complicating human relationships. Modern AI, however, has a tremendous impact on how we do things and also on the ways we relate to one another. Facing this challenge, new principles of AI bioethics must be considered and developed to provide guidelines for AI technology to observe, so that the world will benefit from the progress of this new intelligence.
What is artificial intelligence?
Artificial intelligence (AI) has many different definitions; some see it as the created technology that allows computers and machines to function intelligently. Some see it as a machine that replaces human labor, working for people more effectively and speedily. Others see it as “a system” with the ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [ 1 ].
Despite the different definitions, the common understanding of AI is that it is associated with machines and computers that help humankind solve problems and facilitate working processes. In short, it is an intelligence designed by humans and demonstrated by machines. The term AI is used to describe these functions of a human-made tool that emulates the “cognitive” abilities of the natural intelligence of human minds [ 2 ].
Along with the rapid development of cybernetic technology in recent years, AI has appeared in almost every sphere of our lives, and some of it may no longer be regarded as AI because it has become so common in daily life that we are used to it, such as optical character recognition or Siri (speech interpretation and recognition interface) on our information-searching devices [ 3 ].
Different types of artificial intelligence
From the functions and abilities provided by AI, we can distinguish two different types. The first is weak AI, also known as narrow AI, which is designed to perform a narrow task such as facial recognition, an Internet search through Siri, or driving a car. Many currently existing systems that claim to use “AI” are likely operating as weak AI focused on a narrowly defined specific function. Although weak AI seems to be helpful to human living, some still think it could be dangerous, because it could cause disruptions in the electric grid or damage nuclear power plants if it malfunctions.
The long-term goal of many researchers is to create strong AI, or artificial general intelligence (AGI), which is the speculative intelligence of a machine that has the capacity to understand or learn any intellectual task a human being can, thus assisting humans in unraveling the problems they confront. While narrow AI may outperform humans at specific tasks such as playing chess or solving equations, its scope remains narrow. AGI, however, could outperform humans at nearly every cognitive task.
Strong AI is a different conception of AI: that a machine can be programmed to actually be a human mind, to be intelligent in whatever it is commanded to attempt, and even to have perception, beliefs, and other cognitive capacities that are normally only ascribed to humans [ 4 ].
In summary, we can see these different functions of AI [ 5 , 6 ]:
- Automation: What makes a system or process function automatically
- Machine learning and vision: The science of getting a computer to act through deep learning in order to predict and analyze, and to see through a camera, analog-to-digital conversion, and digital signal processing
- Natural language processing: The processing of human language by a computer program, such as spam detection or instantly converting one language to another to help humans communicate
- Robotics: A field of engineering focusing on the design and manufacturing of robots, the so-called machine men. They are used to perform tasks for human convenience, or tasks too difficult or dangerous for humans to perform, and they can operate without stopping, such as on assembly lines
- Self-driving cars: These use a combination of computer vision, image recognition, and deep learning to build automated control into a vehicle.
Do human beings really need artificial intelligence?
Is AI really needed in human society? It depends. If humans opt for a faster and more effective way to complete their work and to work constantly without taking a break, yes, it is. However, if humankind is satisfied with a natural way of living without excessive desires to conquer the order of nature, it is not. History tells us that humans are always looking for something faster, easier, more effective, and more convenient to finish the task at hand; therefore, the pressure for further development motivates humankind to look for new and better ways of doing things. Humankind, as Homo sapiens, discovered that tools could ease many of the hardships of daily living, and through the tools they invented, humans could complete work better, faster, smarter, and more effectively. The drive to create new things became the incentive of human progress. We enjoy a much easier and more leisurely life today all because of the contribution of technology. Human society has been using tools since the beginning of civilization, and human progress depends on them. Humans living in the 21st century do not have to work as hard as their forefathers did, because they have new machines to work for them. This is all well and good, but as technology kept developing, a warning came in the early 20th century: Aldous Huxley cautioned in his book Brave New World that humans might step into a world in which we create a monster, or a superhuman, through the development of genetic technology.
Moreover, up-to-date AI is breaking into the healthcare industry too, by assisting doctors in diagnosing, finding the sources of diseases, suggesting various treatments, performing surgery, and even predicting whether an illness is life-threatening [ 7 ]. A recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot as it performed soft-tissue surgery, stitching together a pig's bowel, and the robot finished the job better than a human surgeon, the team claimed [ 8 , 9 ]. This demonstrates that robotically assisted surgery can overcome the limitations of pre-existing minimally invasive surgical procedures and enhance the capacities of surgeons performing open surgery.
Above all, we see high-profile examples of AI, including autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art, playing games (such as chess or Go), search engines (such as Google Search), online assistants (such as Siri), image recognition in photographs, spam filtering, and predicting flight delays. All of these have made human life so much easier and more convenient that we are used to them and take them for granted. AI has become indispensable; even though it is not absolutely needed, without it our world would be in chaos in many ways today.
The impact of artificial intelligence on human society
Negative impact
Questions have been asked: with the progressive development of AI, human labor will no longer be needed, as everything can be done mechanically. Will humans become lazier and eventually degrade to the stage where we return to our primitive form of being? The process of evolution takes eons, so we will not notice the backsliding of humankind. But what if AI becomes so powerful that it can program itself to be in charge and disobey the orders given by its master, humankind?
Let us consider the negative impacts AI may have on human society [ 10 , 11 ]:
- A huge social change that disrupts the way we live in the human community will occur. Humankind has to be industrious to make its living, but with the service of AI, we can simply program a machine to do things for us without even lifting a tool. Human closeness will gradually diminish, as AI replaces the need for people to meet face to face to exchange ideas. AI will stand in between people, as personal gatherings will no longer be needed for communication
- Unemployment is the next concern, because many jobs will be replaced by machines. Today, many automobile assembly lines are filled with machinery and robots, forcing traditional workers to lose their jobs. Even in supermarkets, store clerks will no longer be needed, as digital devices can take over human labor
- Wealth inequality will be created, as the investors in AI will take up the major share of the earnings. The gap between the rich and the poor will widen. The so-called “M-shaped” wealth distribution will become more obvious
- New issues surface, not only in a social sense but also in AI itself, as AI trained to operate a given task can eventually take off to a stage where humans have no control, thus creating unanticipated problems and consequences. This refers to AI's capacity, after being loaded with all the needed algorithms, to function automatically on its own course, ignoring the commands given by the human controller
- The human masters who create AI may invent something that is racially biased or egocentrically oriented to harm certain people or things. For instance, the United Nations has voted to limit the spread of nuclear power for fear of its indiscriminate use to destroy humankind or to target certain races or regions to achieve domination. Similarly, AI can be programmed to target a certain race or certain objects to accomplish a command of destruction given by its programmers, thus creating world disaster.
Positive impact
There are, however, many positive impacts on humans as well, especially in the field of healthcare. AI gives computers the capacity to learn, reason, and apply logic. Scientists, medical researchers, clinicians, mathematicians, and engineers, working together, can design AI aimed at medical diagnosis and treatment, thus offering reliable and safe systems of healthcare delivery. As health professionals and medical researchers endeavor to find new and efficient ways of treating diseases, not only can digital computers assist in analysis, but robotic systems can also be created to perform delicate medical procedures with precision. Here, we see the contribution of AI to healthcare [ 7 , 11 ]:
Fast and accurate diagnostics
IBM's Watson computer has been used for diagnosis with fascinating results: loading the data into the computer instantly yields AI's diagnosis. AI can also provide various treatment options for physicians to consider. The procedure works something like this: the digital results of a physical examination are loaded into the computer, which considers all possibilities, automatically diagnoses whether or not the patient suffers from some deficiency or illness, and even suggests the various kinds of treatment available.
Socially therapeutic robots
Pets are recommended to senior citizens to ease their tension; they reduce blood pressure, anxiety, and loneliness, and increase social interaction. Now robots have been suggested to accompany lonely older people, and even to help with some household chores. Therapeutic robots and socially assistive robot technology help improve the quality of life for seniors and the physically challenged [ 12 ].
Reduce errors related to human fatigue
Human error in the workforce is inevitable and often costly; the greater the level of fatigue, the higher the risk of errors occurring. AI technology, however, does not suffer from fatigue or emotional distraction. It reduces errors and can accomplish tasks faster and more accurately.
Artificial intelligence-based surgical contribution
AI-based surgical procedures are now available for people to choose. Although this AI still needs to be operated by health professionals, it can complete the work with less damage to the body. The da Vinci surgical system, a robotic technology that allows surgeons to perform minimally invasive procedures, is available in most hospitals now. These systems enable a degree of precision and accuracy far greater than procedures done manually. The less invasive the surgery, the less trauma it causes, with less blood loss and less anxiety for patients.
Improved radiology
The first computed tomography scanners were introduced in 1971. The first magnetic resonance imaging (MRI) scan of the human body took place in 1977. By the early 2000s, cardiac MRI, body MRI, and fetal imaging had become routine. The search continues for new algorithms to detect specific diseases and to analyze the results of scans [ 9 ]. All of these are contributions of AI technology.
Virtual presence
Virtual presence technology can enable the distant diagnosis of disease. The patient does not have to leave his or her bed; using a remote presence robot, doctors can check on patients without actually being there. Health professionals can move around and interact almost as effectively as if they were present. This allows specialists to assist patients who are unable to travel.
Some cautions to bear in mind
Despite all the positive promise that AI provides, human experts are still essential and necessary to design, program, and operate AI, and to keep unpredictable errors from occurring. Beth Kindig, a San Francisco-based technology analyst with more than a decade of experience analyzing private and public technology companies, published a free newsletter indicating that although AI holds promise for better medical diagnosis, human experts are still needed to avoid the misclassification of unknown diseases, because AI is not omnipotent and cannot solve all problems for humankind. There are times when AI meets an impasse, and to carry on its mission it may simply proceed indiscriminately, ending up creating more problems. Thus, a vigilant watch over AI's functioning cannot be neglected. This reminder is known as keeping a physician in the loop [ 13 ].
The question of ethical AI was consequently brought up by Elizabeth Gibney in her article published in Nature, to caution against bias and possible societal harm [ 14 ]. The Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, in 2020 raised the ethical controversies of applying AI technology, such as in predictive policing or facial recognition, where biased algorithms can result in harm to vulnerable populations [ 14 ]. For instance, such a system could be programmed to target a certain race or group as the probable suspects of crime or as troublemakers.
The challenge of artificial intelligence to bioethics
Artificial intelligence ethics must be developed
Bioethics is a discipline that focuses on the relationships among living beings. Bioethics accentuates the good and the right in biospheres and can be categorized into at least three areas: bioethics in health settings, which concerns the relationship between physicians and patients; bioethics in social settings, which concerns the relationships among humankind; and bioethics in environmental settings, which concerns the relationship between man and nature, including animal ethics, land ethics, ecological ethics, etc. All of these are concerned with relationships within and among natural existences.
As AI arises, humans face a new challenge: establishing a relationship with something that is not natural in its own right. Bioethics normally discusses relationships within natural existences, either humankind or its environment, that are part of natural phenomena. But now we have to deal with something that is human-made, artificial, and unnatural, namely AI. Humans have created many things, yet never before have humans had to think about how to relate ethically to their own creation. AI by itself is without feeling or personality. AI engineers have realized the importance of giving AI the ability to discern, so that it will avoid deviant activities causing unintended harm. From this perspective, we understand that AI can have a negative impact on humans and society; thus, a bioethics of AI becomes important to make sure that AI will not take off on its own by deviating from its originally designated purpose.
Stephen Hawking warned early in 2014 that the development of full AI could spell the end of the human race. He said that once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate [ 15 ]. Humans, who are limited by slow biological evolution, could not compete and would be superseded. In his book Superintelligence, Nick Bostrom gives an argument that AI will pose a threat to humankind. He argues that sufficiently intelligent AI can exhibit convergent behavior such as acquiring resources or protecting itself from being shut down, and it might harm humanity [ 16 ].
The question is: do we have to think of bioethics for a human-created product that bears no bio-vitality? Can a machine have a mind, consciousness, and mental states in exactly the same sense that human beings do? Can a machine be sentient and thus deserve certain rights? Can a machine intentionally cause harm? Regulations must be contemplated as a bioethical mandate for AI production.
Studies have shown that AI can reflect the very prejudices humans have tried to overcome. As AI becomes “truly ubiquitous,” it has a tremendous potential to positively impact all manner of life, from industry to employment to health care and even security. Addressing the risks associated with the technology, Janosch Delcker, Politico Europe's AI correspondent, said: “I don't think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today. What's crucially important, I believe, is to recognize that those biases exist and that policymakers try to mitigate them” [ 17 ]. The High-Level Expert Group on AI of the European Union presented its Ethics Guidelines for Trustworthy AI in 2019, suggesting that AI systems must be accountable, explainable, and unbiased. Three emphases are given:
- Lawful: respecting all applicable laws and regulations
- Ethical: respecting ethical principles and values
- Robust: being adaptive, reliable, fair, and trustworthy from a technical perspective while taking into account its social environment [ 18 ].
Seven requirements are recommended [ 18 ]:
- AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes
- AI should be secure and accurate. It should not be easily compromised by external attacks, and it should be reasonably reliable
- Personal data collected by AI systems should be secure and private. It should not be accessible to just anyone, and it should not be easily stolen
- Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make
- Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines
- AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change”
- AI systems should be auditable and covered by existing protections for corporate whistleblowers. The negative impacts of systems should be acknowledged and reported in advance.
From these guidelines, we can suggest that future AI must be equipped with human sensibility or “AI humanities.” To accomplish this, AI researchers, manufacturers, and all related industries must bear in mind that technology exists to serve, not to manipulate, humans and their society. Bostrom and Yudkowsky listed responsibility, transparency, auditability, incorruptibility, and predictability [ 19 ] as criteria for a computerized society to think about.
Suggested principles for artificial intelligence bioethics
Nathan Strout, a reporter covering space and intelligence systems, recently reported that the intelligence community is developing its own AI ethics. The Pentagon announced in February 2020 that it is in the process of adopting principles for using AI as guidelines for the department to follow while developing new AI tools and AI-enabled technologies. Ben Huebner, chief of the Office of the Director of National Intelligence's Civil Liberties, Privacy, and Transparency Office, said, “We're going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient” [ 20 ]. Two themes have been suggested for the AI community to think more about: explainability and interpretability. Explainability is the concept of understanding how the analytic works, while interpretability is being able to understand a particular result produced by an analytic [ 20 ].
All the principles suggested by scholars for AI bioethics are well considered. Drawing on bioethical principles from all the related fields of bioethics, I suggest four principles here for consideration to guide the future development of AI technology. We must, however, bear in mind that the main attention should still be placed on humans, because AI, after all, has been designed and manufactured by humans. AI proceeds with its work according to its algorithm. AI itself cannot empathize, nor does it have the ability to discern good from evil, and it may commit mistakes in its processes. The entire ethical quality of AI depends on its human designers; therefore, this is an AI bioethics and, at the same time, a trans-bioethics that bridges the human and material worlds. Here are the principles:
- Beneficence: Beneficence means doing good, and here it refers to the requirement that the purpose and functions of AI should benefit the whole of human life, society, and the universe. Any AI that would perform destructive work on the bio-universe, including all life forms, must be avoided and forbidden. AI scientists must understand that the reason for developing this technology is no other than to benefit human society as a whole, not any individual's personal gain. It should be altruistic, not egocentric, in nature
- Value-upholding: This refers to AI's congruence with social values; in other words, the universal values that govern the order of the natural world must be observed. AI cannot be elevated above social and moral norms and must be bias-free. Scientific and technological development must be for the enhancement of human well-being, which is the chief value AI must hold dearly as it progresses further
- Lucidity: AI must be transparent, without any hidden agenda. It has to be easily comprehensible, detectable, incorruptible, and perceivable. AI technology should be made available for public auditing, testing, and review, and be subject to accountability standards. In high-stakes settings like diagnosing cancer from radiologic images, an algorithm that cannot “explain its work” may pose an unacceptable risk. Thus, explainability and interpretability are absolutely required
- Accountability: AI designers and developers must bear in mind that they carry a heavy responsibility on their shoulders for the outcome and impact of AI on the whole of human society and the universe. They must be accountable for whatever they manufacture and create.
Conclusion
AI is here to stay in our world, and we must try to enforce an AI bioethics of beneficence, value-upholding, lucidity, and accountability. Since AI is without a soul, its bioethics must be transcendental, to bridge the shortcoming of AI's inability to empathize. AI is a reality of the world. We must take note of what Joseph Weizenbaum, a pioneer of AI, said: that we must not let computers make important decisions for us, because AI as a machine will never possess human qualities such as compassion and the wisdom to morally discern and judge [ 10 ]. Bioethics is not a matter of calculation but a process of conscientization. Although AI designers can upload all information, data, and programming to AI so that it functions like a human being, it is still a machine and a tool. AI will always remain AI, without authentic human feelings and the capacity to commiserate. Therefore, AI technology must be developed with extreme caution. As von der Leyen said in the White Paper on AI – A European Approach to Excellence and Trust: “AI must serve people, and therefore, AI must always comply with people's rights…. High-risk AI that potentially interferes with people's rights has to be tested and certified before it reaches our single market” [ 21 ].
Financial support and sponsorship
Conflicts of interest.
There are no conflicts of interest.
- NEWS FEATURE
- 27 September 2023
- Correction 10 October 2023
AI and science: what 1,600 researchers think
Richard Van Noorden & Jeffrey M. Perkel

Illustration by Acapulco Studio
Artificial-intelligence (AI) tools are becoming increasingly common in science, and many scientists anticipate that they will soon be central to the practice of research, suggests a Nature survey of more than 1,600 researchers around the world.

When respondents were asked how useful they thought AI tools would become for their fields in the next decade, more than half expected the tools to be ‘very important’ or ‘essential’. But scientists also expressed strong concerns about how AI is transforming the way that research is done.
The share of research papers that mention AI terms has risen in every field over the past decade, according to an analysis for this article by Nature.
Machine-learning statistical techniques are now well established, and the past few years have seen rapid advances in generative AI, including large language models (LLMs), that can produce fluent outputs such as text, images and code on the basis of the patterns in their training data. Scientists have been using these models to help summarize and write research papers, brainstorm ideas and write code, and some have been testing out generative AI to help produce new protein structures, improve weather forecasts and suggest medical diagnoses, among many other ideas.

With so much excitement about the expanding abilities of AI systems, Nature polled researchers about their views on the rise of AI in science, including both machine-learning and generative AI tools.
Focusing first on machine-learning, researchers picked out many ways that AI tools help them in their work. From a list of possible advantages, two-thirds noted that AI provides faster ways to process data, 58% said that it speeds up computations that were not previously feasible, and 55% mentioned that it saves scientists time and money.
“AI has enabled me to make progress in answering biological questions where progress was previously infeasible,” said Irene Kaplow, a computational biologist at Duke University in Durham, North Carolina.

The survey results also revealed widespread concerns about the impacts of AI on science. From a list of possible negative impacts, 69% of the researchers said that AI tools can lead to more reliance on pattern recognition without understanding, 58% said that results can entrench bias or discrimination in data, 55% thought that the tools could make fraud easier and 53% noted that ill-considered use can lead to irreproducible research.
“The main problem is that AI is challenging our existing standards for proof and truth,” said Jeffrey Chuang, who studies image analysis of cancer at the Jackson Laboratory in Farmington, Connecticut.


Essential uses
To assess the views of active researchers, Nature e-mailed more than 40,000 scientists who had published papers in the last 4 months of 2022, as well as inviting readers of the Nature Briefing to take the survey. Because researchers interested in AI were much more likely to respond to the invitation, the results aren’t representative of all scientists. However, the respondents fell into 3 groups: 48% who directly developed or studied AI themselves, 30% who had used AI for their research, and the remaining 22% who did not use AI in their science. (These categories were more useful for probing different responses than were respondents’ research fields, genders or geographical regions; see Supplementary information for full methodology).

Among those who used AI in their research, more than one-quarter felt that AI tools would become ‘essential’ to their field in the next decade, compared with 4% who thought the tools essential now, and another 47% felt AI would be ‘very useful’. (Those whose research field was already AI were not asked this question.) Researchers who don’t use AI were, unsurprisingly, less excited. Even so, 9% felt these techniques would become ‘essential’ in the next decade, and another 34% said they would be ‘very useful’.

Large language models
The chatbot ChatGPT and its LLM cousins were the tools that researchers mentioned most often when asked to type in the most impressive or useful example of AI tools in science (closely followed by protein-folding AI tools, such as AlphaFold, that create 3D models of proteins from amino-acid sequences). But ChatGPT also topped researchers’ choice of the most concerning uses of AI in science. When asked to select from a list of possible negative impacts of generative AI, 68% of researchers worried about proliferating misinformation, another 68% thought that it would make plagiarism easier — and detection harder, and 66% were worried about bringing mistakes or inaccuracies into research papers.

Respondents added that they were worried about faked studies, false information and perpetuating bias if AI tools for medical diagnostics were trained on historically biased data. Scientists have seen evidence of this: a team in the United States reported, for instance, that when they asked the LLM GPT-4 to suggest diagnoses and treatments for a series of clinical case studies, the answers varied depending on the patients’ race or gender (T. Zack et al. Preprint at medRxiv https://doi.org/ktdz; 2023) — probably reflecting the text that the chatbot was trained on.
“There is clearly misuse of large language models, inaccuracy and hollow but professional-sounding results that lack creativity,” said Isabella Degen, a software engineer and former entrepreneur who is now studying for a PhD in using AI in medicine at the University of Bristol, UK. “In my opinion, we don’t understand well where the border between good use and misuse is.”
The clearest benefit, researchers thought, was that LLMs aided researchers whose first language is not English, by helping to improve the grammar and style of their research papers, or to summarize or translate other work. “A small number of malicious players notwithstanding, the academic community can demonstrate how to use these tools for good,” said Kedar Hippalgaonkar, a materials scientist at Nanyang Technological University in Singapore.

Researchers who regularly use LLMs at work are still in a minority, even among the interested group who took Nature ’s survey. Some 28% of those who studied AI said they used generative AI products such as LLMs every day or more than once a week, 13% of those who only use AI said they did, and just 1% among others, although many had at least tried the tools.

Moreover, the most popular use among all groups was for creative fun unrelated to research (one respondent used ChatGPT to suggest recipes); a smaller share used the tools to write code, brainstorm research ideas and to help write research papers.

Some scientists were unimpressed by the output of LLMs. “It feels ChatGPT has copied all the bad writing habits of humans: using a lot of words to say very little,” one researcher who uses the LLM to help copy-edit papers wrote. Although some were excited by the potential of LLMs for summarizing data into narratives, others had a negative reaction. “If we use AI to read and write articles, science will soon move from ‘for humans by humans’ to ‘for machines by machines’,” wrote Johannes Niskanen, a physicist at the University of Turku in Finland.
Barriers to progress
Around half of the scientists in the survey said that there were barriers preventing them from developing or using AI as much as they would like — but the obstacles seem to be different for different groups. The researchers who directly studied AI were most concerned about a lack of computing resources, funding for their work and high-quality data to run AI on. Those who work in other fields but use AI in their research tended to be more worried by a lack of skilled scientists and training resources, and they also mentioned security and privacy considerations. Researchers who didn’t use AI generally said that they didn’t need it or find it useful, or that they lacked experience or time to investigate it.

Another theme that emerged from the survey was that commercial firms dominate computing resources for AI and ownership of AI tools — and this was a concern for some respondents. Of the scientists in the survey who studied AI, 23% said they collaborated with — or worked at — firms developing these tools (with Google and Microsoft the most often named), whereas 7% of those who used AI did so. Overall, slightly more than half of those surveyed felt it was ‘very’ or ‘somewhat’ important that researchers using AI collaborate with scientists at such firms.

The principles of LLMs can be usefully applied to build similar models in bioinformatics and cheminformatics, says Garrett Morris, a chemist at the University of Oxford, UK, who works on software for drug discovery, but it’s clear that the models must be extremely large. “Only a very small number of entities on the planet have the capabilities to train the very large models — which require large numbers of GPUs [graphics processing units], the ability to run them for months, and to pay the electricity bill. That constraint is limiting science’s ability to make these kinds of discoveries,” he says.
Researchers have repeatedly warned that the naive use of AI tools in science can lead to mistakes, false positives and irreproducible findings — potentially wasting time and effort. And in the survey, some scientists said they were concerned about poor-quality research in papers that used AI. “Machine learning can sometimes be useful, but AI is causing more damage than it helps. It leads to false discoveries due to scientists using AI without knowing what they are doing,” said Lior Shamir, a computer scientist at Kansas State University in Manhattan.
When asked if journal editors and peer reviewers could adequately review papers that used AI, respondents were split. Among the scientists who used AI for their work but didn’t directly develop it, around half said they didn’t know, one-quarter thought reviews were adequate, and one-quarter thought they were not. Those who developed AI directly tended to have a more positive opinion of the editorial and review processes.

“Reviewers seem to lack the required skills and I see many papers that make basic mistakes in methodology, or lack even basic information to be able to reproduce the results,” says Duncan Watson-Parris, an atmospheric physicist who uses machine learning at the Scripps Institution of Oceanography in San Diego, California. The key, he says, is whether journal editors are able to find referees with enough expertise to review the studies.
That can be difficult to do, according to one Japanese respondent who worked in earth sciences but didn’t want to be named. “As an editor, it’s very hard to find reviewers who are familiar both with machine-learning (ML) methods and with the science that ML is applied to,” he wrote.
Nature also asked respondents how concerned they were by seven potential impacts of AI on society which have been widely discussed in the news. The potential for AI to be used to spread misinformation was the most worrying prospect for the researchers, with two-thirds saying they were ‘extremely’ or ‘very’ concerned by it. Automated AI weapons and AI-assisted surveillance were also high up on the list. The least concerning impact was the idea that AI might be an existential threat to humanity — although almost one-fifth of respondents still said they were ‘extremely’ or ‘very’ concerned by this prospect.

Many researchers, however, said AI and LLMs were here to stay. “AI is transformative,” wrote Yury Popov, a specialist in liver disease at the Beth Israel Deaconess Medical Center in Boston, Massachusetts. “We have to focus now on how to make sure it brings more benefit than issues.”
Nature 621, 672-675 (2023)
doi: https://doi.org/10.1038/d41586-023-02980-0
Updates & Corrections
Correction 10 October 2023 : An earlier version of this story erroneously affiliated Kedar Hippalgaonkar with the National University of Singapore.
Supplementary Information
- AI survey methodology (docx)
- AI survey questions (pdf)
- AI survey results (xlsx)
- Published: 17 April 2021
Artificial intelligence and machine learning research: towards digital transformation at a global scale
Akila Sarirete, Zain Balfagih, Tayeb Brahimi, Miltiadis D. Lytras & Anna Visvizi
Journal of Ambient Intelligence and Humanized Computing, volume 13, pages 3319–3321 (2022)
Artificial intelligence (AI) is reshaping how we live, learn, and work. Until recently, AI was a fanciful concept, more closely associated with science fiction than with anything else. However, driven by unprecedented advances in sophisticated information and communication technology (ICT), AI today is synonymous with technological progress, both already attained and yet to come, in all spheres of our lives (Chui et al. 2018; Lytras et al. 2018, 2019).
Considering that machine learning (ML) and AI are apt to reach unforeseen levels of accuracy and efficiency, this special issue sought to promote research on AI and ML seen as functions of data-driven innovation and digital transformation. The combination of expanding ICT-driven capabilities and capacities identifiable across our socio-economic systems, along with growing consumer expectations vis-a-vis technology and its value added for our societies, requires a multidisciplinary research agenda on AI and ML (Lytras et al. 2021; Visvizi et al. 2020; Chui et al. 2020). Such a research agenda should revolve around the following five defining issues (Fig. 1):

Fig. 1: An AI-driven digital transformation in all aspects of human activity (Source: The Authors)
- Integration of diverse data warehouses into unified ecosystems of AI and ML value-based services
- Deployment of robust AI and ML processing capabilities for enhanced decision making and the generation of value out of data
- Design of innovative, novel AI and ML applications for predictive and analytical capabilities
- Design of sophisticated AI- and ML-enabled intelligence components with critical social impact
- Promotion of the digital transformation in all aspects of human activity, including business, healthcare, government, commerce, social intelligence, etc.
Such development will also have a critical impact on governments, policies, regulations, and initiatives aiming to interpret the value of the AI-driven digital transformation for the sustainable economic development of our planet. Additionally, the disruptive character of AI and ML technology and research will require further research on business models and the management of innovation capabilities.
This special issue is based on submissions invited from the 17th Annual Learning and Technology Conference 2019, held at Effat University, together with an open call. Several very good submissions were received. All of them were subjected to the rigorous peer review process of the Journal of Ambient Intelligence and Humanized Computing.
A variety of innovative topics are covered by the papers published in this special issue, including:
- Stock market prediction using machine learning
- Detection of apple diseases and pests based on multi-model LSTM-based convolutional neural networks
- ML for searching
- Machine learning for learning automata
- Entity recognition and relation extraction
- Intelligent surveillance systems
- Activity recognition and k-means clustering
- Distributed mobility management
- Review rating prediction with deep learning
- Cybersecurity: botnet detection with deep learning
- Self-training methods
- Neuro-fuzzy inference systems
- Fuzzy controllers
- Monarch butterfly optimized control with robustness analysis
- GMM methods for speaker age and gender classification
- Regression methods for permeability prediction of petroleum reservoirs
- Surface EMG signal classification
- Pattern mining
- Human activity recognition in smart environments
- Teaching-learning based optimization algorithms
- Big data analytics
- Diagnosis based on event-driven processing and machine learning for mobile healthcare
Over a decade ago, Effat University envisioned a timely platform that would bring together educators, researchers, and tech enthusiasts under one roof and function as a fount of creativity and innovation. The dream was that such a platform would bridge the existing gap and become a leading hub for innovators across disciplines to share their knowledge and exchange novel ideas. It was in 2003 that this dream was realized and the first Learning & Technology Conference was held. Up until today, the conference has covered a variety of cutting-edge themes such as digital literacy, cyber citizenship, edutainment, massive open online courses, and many others. The conference has also attracted key, prominent figures in the fields of science and technology, such as Farouq El Baz from NASA and Queen Rania Al-Abdullah of Jordan, who addressed large, eager-to-learn audiences and inspired many with unique stories.
While emerging innovations such as artificial intelligence technologies are seen today as promising instruments that could pave our way to the future, they have also long been focal points of fruitful discussions at L&T. AI was selected as the theme of this conference because of its far-reaching impact. The Saudi government has recognized this impact and has already taken concrete steps to invest in AI. The Kingdom's Vision 2030 states: "In technology, we will increase our investments in, and lead, the digital economy." Dr. Ahmed Al Theneyan, Deputy Minister of Technology, Industry and Digital Capabilities, stated: "The Government has invested around USD 3 billion in building the infrastructure so that the country is AI-ready and can become a leader in AI use." Vision 2030 programs also promote innovation in technologies. Another major step the country has taken is the establishment of NEOM, a model smart city.
Effat University recognized this ambition and started working to make it a reality by offering academic programs that support the sectors such projects require. For example, the master's program in Energy Engineering was launched four years ago to support the energy sector, and the bachelor's program in Computer Science added tracks in Artificial Intelligence and Cyber Security in the Fall 2020 semester. Additionally, the Energy & Technology and Smart Building Research Centers were established to support innovation in the technology and energy sectors. In general, Effat University works effectively to support the KSA in achieving its vision during this time of national transformation by graduating skilled citizens in different fields of technology.
The guest editors would like to take this opportunity to thank all the authors for the effort they put into the preparation of their manuscripts and for their valuable contributions. We wish to express our deepest gratitude to the referees, who provided instrumental and constructive feedback to the authors. We also extend our sincere thanks and appreciation to the organizing team, under the leadership of the Chair of the L&T 2019 Conference Steering Committee and University President, Dr. Haifa Jamal Al-Lail, for her support and dedication.
Our sincere thanks go to the Editor-in-Chief for his kind help and support.
References
Chui KT, Lytras MD, Visvizi A (2018) Energy sustainability in smart cities: artificial intelligence, smart monitoring, and optimization of energy consumption. Energies 11(11):2869
Chui KT, Fung DCL, Lytras MD, Lam TM (2020) Predicting at-risk university students in a virtual learning environment via a machine learning algorithm. Comput Human Behav 107:105584
Lytras MD, Visvizi A, Daniela L, Sarirete A, De Pablos PO (2018) Social networks research for sustainable smart education. Sustainability 10(9):2974
Lytras MD, Visvizi A, Sarirete A (2019) Clustering smart city services: perceptions, expectations, responses. Sustainability 11(6):1669
Lytras MD, Visvizi A, Chopdar PK, Sarirete A, Alhalabi W (2021) Information management in smart cities: turning end users’ views into multi-item scale development, validation, and policy-making recommendations. Int J Inf Manag 56:102146
Visvizi A, Jussila J, Lytras MD, Ijäs M (2020) Tweeting and mining OECD-related microcontent in the post-truth era: A cloud-based app. Comput Human Behav 107:105958
Author information
Authors and Affiliations
Effat College of Engineering, Effat Energy and Technology Research Center, Effat University, P.O. Box 34689, Jeddah, Saudi Arabia
Akila Sarirete, Zain Balfagih, Tayeb Brahimi & Miltiadis D. Lytras
King Abdulaziz University, Jeddah, 21589, Saudi Arabia
Miltiadis D. Lytras
Effat College of Business, Effat University, P.O. Box 34689, Jeddah, Saudi Arabia
Anna Visvizi
Institute of International Studies (ISM), SGH Warsaw School of Economics, Aleja Niepodległości 162, 02-554, Warsaw, Poland
Anna Visvizi
Corresponding author
Correspondence to Akila Sarirete.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article:
Sarirete, A., Balfagih, Z., Brahimi, T. et al. Artificial intelligence and machine learning research: towards digital transformation at a global scale. J Ambient Intell Human Comput 13, 3319–3321 (2022). https://doi.org/10.1007/s12652-021-03168-y
Published: 17 April 2021
Issue Date: July 2022
DOI: https://doi.org/10.1007/s12652-021-03168-y
Artificial Intelligence: Past, Present, and Future Research Paper
Introduction
Artificial intelligence has been at the forefront of science and technology for decades. The term artificial intelligence is often misinterpreted as the recreation of human intelligence in machine form. However, it would be apposite to define it as the ability of an artificial system to gather, interpret, and apply data for the achievement of specific goals. 1 A distinct feature of artificial intelligence is the ability to execute given tasks without human oversight and to improve performance, with algorithms allowing it to learn from its mistakes and input data. 2 Therefore, programs and techniques based on artificial intelligence are uniquely positioned to accomplish complex functions in a variety of fields.
The idea of machines possessing and operating with human-like intelligence can be traced to the first half of the 20th century. In 1942, Isaac Asimov published a short story about robots possessing artificial intelligence and outlined the fundamental laws of robotics. 3 Many scientists were inspired by Asimov's story, commencing work on intelligence techniques. In 1956, the Rockefeller Foundation funded a workshop on artificial intelligence hosted by Marvin Minsky and John McCarthy, who are considered the fathers of this branch of science. The year 1956 is regarded as the birth year of artificial intelligence and the year the term was coined. Early artificial intelligence projects included the 1964 natural language processing program ELIZA and the 1959 General Problem Solver program aimed at the solution of universal problems. 4 The 21st century produced more intricate artificial intelligence programs and techniques that are utilized in lethal autonomous weapon systems; intelligence, surveillance, and reconnaissance; as well as in medicine, logistics, education, and cyberspace. 5 The latest leaps in artificial intelligence are embodied by quantum computing, which allows for faster data gathering and interpretation. 6 Quantum algorithms present a riveting field of research, particularly in intelligence collection and analysis.
Today, defense and intelligence agencies have unprecedented access to artificial intelligence programs and techniques. Although the artificial intelligence used or developed by the U.S. military and by the military-industrial complex is considered to be in its infancy, it can be highly beneficial. 7 Artificial intelligence allows for faster data interpretation, translating into an increased speed of decision-making processes in the military and reaching more objective solutions by mitigating human error. 8 Moreover, the use of artificial intelligence decreases human labor and costs.
Nevertheless, many experts argue that applying artificial intelligence, including techniques based on quantum computing, presents substantial security challenges. Specifically, the speed of artificial intelligence systems creates incentives for opponent states to resort to preemptive actions, leading to escalation of conflict. 9 Moreover, there is an inherent risk of loss of human control over vital decisions if the system disregards data it marked as inconsequential. For example, failure to specify certain conditions as dangerous can lead to the inability to plan a safe path in autonomous submersibles. Quantum computing, in particular, raises questions pertaining to security and efficiency, with a high potential for algorithm-related errors. It should be noted that most solutions utilized in artificial intelligence are based on heuristic algorithms that may not yield suitable results. 10 Thus, the supposed benefits of a more efficient decision-making process remain uncertain.
Quantum-based artificial intelligence and its use in the military, including intelligence gathering and interpretation, present an interesting field of research. The potential for mistakes in decisions and solutions proposed by artificial intelligence systems can lead to potentially devastating outcomes that can affect numerous people. Therefore, this paper aims to answer the following research question: how do quantum computing algorithms impact artificial intelligence in intelligence collection and interpretation? What is the potential for error and miscalculation in quantum algorithms?
Quantum technology can be applied in a variety of fields within the military. It can be defined as technology built with the use of quantum-mechanical properties, including quantum entanglement, superposition, and tunneling, utilized in separate quantum systems. 11 Thus, quantum warfare is the use of quantum technologies and artificial intelligence in support of national security, at the strategic, tactical, and operational levels, through the employment of highly advanced and efficient gathering and analysis of intelligence. This paper addresses the use of artificial intelligence systems based on quantum technologies in the military, specifically in intelligence data interpretation. Furthermore, its impact on U.S. national security will be assessed, with the paper considering the effect of data interpretation miscalculations on the nation's ability to defend itself.
The comparison of artificial intelligence systems based on different technologies, including quantum technology, will help elucidate how data is collected, excluded, and evaluated by different systems. Unlike classical technology, quantum technology utilizes quantum bits (qubits), which hold more information than binary digits and can therefore process certain data sets at increased speed. 12 This comparison will yield an understanding of how quantum-based artificial intelligence techniques operate and how they can benefit U.S. intelligence agencies. Therefore, this analysis will help assess whether investment in quantum technologies by the military, and in particular by intelligence agencies, is justified.
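To make the information-density claim concrete, here is a minimal sketch, added for illustration and not drawn from the cited sources, of a single qubit represented as a two-amplitude state vector; describing n qubits classically requires 2^n amplitudes, which is where the potential speed-up for certain computations originates.

```python
# Illustrative sketch (not from the cited sources): a qubit as a state vector.
import numpy as np

# A classical bit is either 0 or 1. A qubit is a superposition:
# |psi> = a|0> + b|1>, with complex amplitudes satisfying |a|^2 + |b|^2 = 1.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition

# Measurement probabilities follow the Born rule.
probs = np.abs(qubit) ** 2
print(probs)  # [0.5 0.5]

# Describing n qubits classically takes 2**n complex amplitudes,
# which is why simulating even ~50 qubits classically is already intractable.
for n in (1, 10, 50):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```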
Furthermore, an assessment of the potential for errors and miscalculations in systems based on quantum technologies will make it possible to evaluate their safety and efficiency. In discussing this question, both the technological and ethical aspects of artificial intelligence implementation are considered. Ethical principles such as justified and overridable uses of artificial intelligence and human moral responsibility require exploration. 13 Emphasis will be placed on the possibility of miscalculations when implementing different algorithms. Nevertheless, the ethical issues arising from such errors should not be divorced from the conversation, as mistakes made by the military have the potential to affect people at the national level. Therefore, the study has two primary purposes:
- To examine the efficiency of artificial intelligence systems based on quantum technologies compared with those not built on quantum technologies.
- To consider the probability of errors in data collection and interpretation and their effect on U.S. national security.
Bibliography
Acampora, Giovanni. “Quantum machine intelligence.” Quantum Machine Intelligence 1, no. 1-2 (2019): 1–3. doi:10.1007/s42484-019-00006-5.
Haenlein, Michael, and Andreas Kaplan. “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence.” California Management Review 61, no. 4 (2019): 5–14.
Hoadley, Daniel S., and Nathan J. Lucas. Artificial Intelligence and National Security. Congressional Research Service, 2018. Web.
Krelina, Michal. “Quantum technology for military applications.” EPJ Quantum Technology 8, no. 1 (2021): 24–77.
Morgan, Forrest E., Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman. Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. RAND Corporation, 2020. Web.
Taddeo, Mariarosaria, David McNeish, Alexander Blanchard, and Elizabeth Edgar. “Ethical Principles for Artificial Intelligence in National Defence.” Philosophy & Technology 34, no. 4 (2021): 1707–1729.
Footnotes
- 1 Michael Haenlein and Andreas Kaplan, “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence,” California Management Review 61, no. 4 (2019): 1.
- 2 Daniel S. Hoadley and Nathan J. Lucas, Artificial Intelligence and National Security , (Congressional Research Service, 2018), Web.
- 3 Haenlein and Kaplan, “A Brief History of Artificial Intelligence,” 2.
- 4 Haenlein and Kaplan, “A Brief History of Artificial Intelligence,” 3.
- 5 Hoadley and Lucas, “Artificial Intelligence and National Security.”
- 6 Giovanni Acampora, “Quantum machine intelligence,” Quantum Machine Intelligence 1, no. 1-2 (2019): 1.
- 7 Hoadley and Lucas, “Artificial Intelligence and National Security.”
- 8 Forrest E. Morgan et al., Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World , (RAND Corporation, 2020), Web.
- 9 Hoadley and Lucas, “Artificial Intelligence and National Security.”
- 10 Michal Krelina, “Quantum technology for military applications,” EPJ Quantum Technology 8, no. 1 (2021): 33.
- 11 Krelina, “Quantum technology for military applications,” 27.
- 12 Krelina, “Quantum technology for military applications,” 29.
- 13 Mariarosaria Taddeo et al., “Ethical Principles for Artificial Intelligence in National Defence,” Philosophy & Technology 34, no. 4 (2021): 1720.
Trailblazing computer scientist Fei-Fei Li on human-centered AI

Regina G. Barber, Rachel Carlson, and Berly McCoy
The cover of Fei-Fei Li's new memoir, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI. (Image: Fei-Fei Li)
What is the boundary of the universe? What is the beginning of time?
These are the questions that captivated computer scientist Fei-Fei Li as a budding physicist. As she moved through her studies, she began to ask new questions — ones about human and machine intelligence.
Now, Li is best known for her work in artificial intelligence. Her memoir, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI , came out this week. It weaves together her personal narrative with the history and development of AI.
Throughout her career, Li has advocated for "human-centered" AI. To her, this means creating technology inspired by human intelligence and biology, using AI to enhance human capabilities rather than replace them, and considering the potential impact on humans when developing new technology.
From physics to vision
Li's journey as a scientist began with physics. She was captivated by the way physicists questioned everything.
While reading works by famous physicists, she saw them asking some new questions – and not just about the atomic world, but about life and intelligence. An internship at the University of California, Berkeley further ignited her interest in the brain. She was intrigued by how layers of connected neurons could result in complex, high-level awareness and perception.
In particular, Li was fascinated by vision.
"Rather than bury us in the innumerable details of light, color and form, vision turns our world into the kind of discrete concepts we can describe with words," she writes in her book.
Li later learned about a field of AI called computer vision , or the way scientists train computers to recognize and respond to objects. It's used for things like self-driving cars and x-rays. Li says the process is inspired by the human visual system – but instead of eyes and retinas, computers use cameras and sensors to capture images and data. Then, they need to make sense of that data.
To achieve this goal, computer scientists use something called a neural network , which Li says is also inspired by the human brain. While the brain's fundamental unit is a neuron, neural networks are made of millions of "nodes" stacked together in layers. Like neurons in the brain, these layers of nodes take in and process that data.
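As a rough illustration of the "nodes stacked in layers" idea, the following minimal sketch, which is not code from Li's work and assumes nothing beyond standard NumPy, builds a tiny two-layer network that maps an input vector to an output; production computer-vision networks differ mainly in scale and in how their weights are trained, not in this basic layered structure.

```python
# Minimal sketch of a two-layer neural network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# Layer weights: 4 input features -> 8 hidden nodes -> 1 output node.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    """Each layer takes in the previous layer's output and transforms it,
    loosely like layers of neurons passing signals forward."""
    hidden = relu(x @ W1 + b1)
    return hidden @ W2 + b2

x = rng.normal(size=4)   # stand-in for features derived from an image
print(forward(x))        # untrained output; training would adjust W1, b1, W2, b2
```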
The mystery of machine intelligence
Despite advances in the field, Li says there are still mysteries about how AI learns.
"Now everybody uses powerful AI products like Chat GPT," she says. "But even there, how come it can talk to you in human-like language, but it does stupid errors in math?"
Li says this generation of AI models is trained on data from across the internet, but how all of that data is processed and how models make decisions is still unknown.
To illustrate this point, she rhetorically asks how computers see, "Because what you get in a photo are just lights and colors and shades — yet you read out a cat."
These questions will only continue to grow as the use of AI becomes more widespread and more researchers enter the field.
Keeping AI ethical
Mystery aside, Li says AI can be used for bad or good. In order to ensure it's used for good, she says scientists must commit to exploring potential problems with AI, like bias.
One solution, she thinks, is for society to start coming up with ways to regulate the technology.
"The biggest issue of today's AI is that the technology is developing really fast, but the governance model is still incomplete. And in a way, it's inevitable," she says. "I don't think we ever create governance models before a technology is ready to be governed. That's just not how our society works."
Another solution, she says, is to use AI to enhance human work rather than replace it. This is one reason why she founded the Stanford Institute for Human-Centered Artificial Intelligence and why she thinks the future of AI should include both scientists and non-scientists from all disciplines.
"We should put humans in the center of the development, as well as the deployment applications and governance of AI," Li says.
Today's episode was produced by Rachel Carlson. It was edited by Berly McCoy. Brit Hanson checked the facts. Patrick Murray was the audio engineer.
Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning
A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can.
The stunning prestidigitation, showcased in the video above, is one of nearly 30 tasks that robots have learned to expertly accomplish thanks to Eureka, which autonomously writes reward algorithms to train bots.
Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among other tasks.
The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.
“Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process,” said Anima Anandkumar, senior director of AI research at NVIDIA and an author of the Eureka paper. “Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks.”
AI Trains Robots
Eureka-generated reward programs — which enable trial-and-error learning for robots — outperform expert human-written ones on more than 80% of tasks, according to the paper. This leads to an average performance improvement of more than 50% for the bots.
Robot arm taught by Eureka to open a drawer.
The AI agent taps the GPT-4 LLM and generative AI to write software code that rewards robots for reinforcement learning. It doesn’t require task-specific prompting or predefined reward templates — and readily incorporates human feedback to modify its rewards for results more accurately aligned with a developer’s vision.
Using GPU-accelerated simulation in Isaac Gym, Eureka can quickly evaluate the quality of large batches of reward candidates for more efficient training.
Eureka then constructs a summary of the key stats from the training results and instructs the LLM to improve its generation of reward functions. In this way, the AI is self-improving. It’s taught all kinds of robots — quadruped, bipedal, quadrotor, dexterous hands, cobot arms and others — to accomplish all kinds of tasks.
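Based only on the loop described above, here is a schematic, hedged sketch of an Eureka-style reward-generation cycle. The helper names ask_llm, train_policy_in_sim, and summarize_stats are hypothetical placeholders introduced for illustration; they are not the actual Eureka or Isaac Gym API.

```python
# Schematic sketch of an Eureka-style reward-generation loop.
# The helper functions below are hypothetical placeholders, not real APIs.

def ask_llm(prompt: str) -> str:
    """Placeholder for a GPT-4 call that returns reward-function source code."""
    raise NotImplementedError

def train_policy_in_sim(reward_code: str) -> dict:
    """Placeholder: train an RL policy in a GPU-accelerated simulator using the
    candidate reward, returning training statistics such as a success rate."""
    raise NotImplementedError

def summarize_stats(stats: dict) -> str:
    """Placeholder: condense training statistics into text feedback for the LLM."""
    raise NotImplementedError

def eureka_style_loop(task_description: str, iterations: int = 5, candidates: int = 4) -> str:
    prompt = f"Write a reward function for this task: {task_description}"
    best_code, best_score = None, float("-inf")
    for _ in range(iterations):
        # 1. Sample a batch of candidate reward functions from the LLM.
        batch = [ask_llm(prompt) for _ in range(candidates)]
        # 2. Evaluate each candidate by training a policy in simulation.
        results = [(code, train_policy_in_sim(code)) for code in batch]
        # 3. Keep the best candidate seen so far.
        for code, stats in results:
            if stats["success_rate"] > best_score:
                best_code, best_score = code, stats["success_rate"]
        # 4. Feed a summary of the results back to the LLM to refine the next batch.
        feedback = summarize_stats(max(results, key=lambda r: r[1]["success_rate"])[1])
        prompt = (
            f"{task_description}\n"
            f"Previous reward results: {feedback}\n"
            "Improve the reward function."
        )
    return best_code
```

A developer would replace the placeholders with real LLM and simulator calls; the sketch is only meant to show where the self-improvement described above takes place.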
The research paper provides in-depth evaluations of 20 Eureka-trained tasks, based on open-source dexterity benchmarks that require robotic hands to demonstrate a wide range of complex manipulation skills.
The results from nine Isaac Gym environments are showcased in visualizations generated using NVIDIA Omniverse.
Humanoid robot learns a running gait via Eureka.
“Eureka is a unique combination of large language models and NVIDIA GPU-accelerated simulation technologies,” said Linxi “Jim” Fan, senior research scientist at NVIDIA, who’s one of the project’s contributors. “We believe that Eureka will enable dexterous robot control and provide a new way to produce physically realistic animations for artists.”
It’s breakthrough work bound to get developers’ minds spinning with possibilities, adding to recent NVIDIA Research advancements like Voyager , an AI agent built with GPT-4 that can autonomously play Minecraft .
NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.
Learn more about Eureka and NVIDIA Research .