Artificial intelligence, all that glitters is not gold

Furio Oldani
The European Union defines artificial intelligence, abbreviated “AI” in English and “IA” in Italian, as “the ability of a machine to demonstrate human capabilities such as reasoning, learning, planning and creativity.”
AI therefore allows systems to perceive their surrounding environment, relate to it and act to achieve specific objectives, solving any problems that arise along the way. Systems operating on the basis of AI are in practice capable of adapting their own behavior, and therefore that of the hardware they drive, by analyzing the effects of previous actions and working autonomously. They thus respond flexibly to the inputs they receive from outside, unlike traditional software, which makes the systems it controls operate in a rigid and immutable manner. It is clear at this point that the ability of a machine to operate on the basis of artificial intelligence programs represents a significant technological plus and, as such, one that is also “sellable” to the end user.
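Purely by way of illustration (not taken from the original article), the following minimal Python sketch contrasts a rigid, fixed-rule controller with one that adjusts its own behavior from the observed effect of its previous actions; all names, thresholds and numbers are hypothetical.

```python
# Illustrative sketch only: a rigid rule-based controller versus one that
# adapts its set-point by analyzing the effect of its previous actions.
# All names and values are hypothetical, not drawn from the article.

def traditional_controller(soil_moisture: float) -> float:
    """Always applies the same fixed rule, regardless of outcomes."""
    return 10.0 if soil_moisture < 30.0 else 0.0  # liters of irrigation


class AdaptiveController:
    """Adjusts its own threshold based on feedback from previous actions."""

    def __init__(self, threshold: float = 30.0, learning_rate: float = 0.5):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def act(self, soil_moisture: float) -> float:
        return 10.0 if soil_moisture < self.threshold else 0.0

    def feedback(self, crop_stress: float) -> None:
        # If the crop still shows stress after acting, water earlier next time;
        # if it does not, relax the threshold slightly.
        self.threshold += self.learning_rate * (crop_stress - 0.5)


print(traditional_controller(28.0))  # always the same answer for the same input

controller = AdaptiveController()
for moisture, stress in [(28.0, 0.8), (32.0, 0.7), (35.0, 0.2)]:
    controller.act(moisture)
    controller.feedback(stress)
    print(f"adapted threshold: {controller.threshold:.1f}")
```

The fixed rule never changes, whereas the adaptive controller's behavior drifts with the outcomes it observes, which is the distinction the paragraph above draws between traditional software and AI-driven systems.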
Precisely for this reason, at Eima many advertising brochures presented their respective top-of-the-range operational solutions as machines capable of operating autonomously on the basis of artificial intelligence software. In reality this was not always the case: the machines in question often did operate autonomously, but on the basis of traditional software that cannot be characterized as artificial intelligence in the strict sense.
There is also “weak” AI
This also applies to particularly advanced and complex software: programs that resemble artificial intelligence without actually being such, and that are technically characterized as “weak AI”. The risk that the label “AI” is merely a promotional advertising device is therefore real, also because the boundary between advanced software and a genuine artificial intelligence program is increasingly blurred and difficult to identify. It is a problem that will, however, be overcome in the future as true AI ends up taking over.
The first research in the 1950s
Studies on artificial intelligence date back to the 1950s and the work of the English mathematician Alan Mathison Turing, followed by that of the American computer scientist John McCarthy. It was McCarthy who developed the programming language “Lisp”, which became a fundamental tool for subsequent AI research. In the following decades, studies concentrated on problems relating to automatic learning processes and the development of artificial neural networks, a field to which the work of the British, French and Canadian computer scientists Geoffrey Hinton, Yann LeCun and Yoshua Bengio gave a strong impetus. In the 2000s, thanks to the exponential growth of computing power, machine learning and the development of so-called “decision trees” made giant strides.
These advances made it possible to tackle complex problems such as image recognition, machine translation and personalized recommendations, to name just a few. These operational possibilities have in turn given rise to further developments that are revolutionizing many sectors, including medicine, industry, finance and automation, though not without raising ethical and social issues, fundamentally linked to the protection of privacy and the impact of these technologies on employment. Artificial intelligence is therefore opening up a future full of promise, but one that requires conscious and responsible management.
There are many open challenges
Artificial intelligence has all the potential to modify the methods and timing of production cycles. This opportunity was the theme of a meeting organized during Eima to explore the development prospects of AI in the field of agricultural mechanization, without losing sight of the fact that “true” artificial intelligence, based on advanced self-learning architectures capable of closely simulating the functioning of the human mind, is still a long way off. In the immediate future, agriculture 4.0 systems will continue to dominate, with increasingly sophisticated digital solutions aimed at collecting large quantities of data in real time to support users in making informed decisions on the management of resources and operational cycles. The collection, integration and processing of data will therefore precede the development of advanced artificial intelligence based on self-learning systems and algorithms capable of processing information and generating decisions without relying on predetermined mathematical or statistical models. But training algorithms so that they can learn from varied environmental situations and respond appropriately, without human supervision, requires enormous amounts of data, which today, in addition to being difficult to store and process, are often split into subsystems that do not communicate with each other. The path towards “real” artificial intelligence is therefore still very long and complex, and the steps needed to open a door towards the birth of this technology will depend first of all on the ability to create hardware architectures with greater computational performance.
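As a hypothetical illustration of the data integration and decision support described above (not from the original article), the following Python sketch merges readings from two farm subsystems that would otherwise not communicate and applies a predetermined decision rule; all column names, values and thresholds are invented.

```python
# Purely illustrative sketch of integrating data from two subsystems
# (soil sensors and a weather feed) that do not natively communicate.
# Hypothetical column names and thresholds; requires pandas.
import pandas as pd

soil = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-11-06 08:00", "2024-11-06 12:00"]),
    "moisture_pct": [27.0, 24.0],
})
weather = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-11-06 08:00", "2024-11-06 12:00"]),
    "rain_forecast_mm": [0.0, 6.0],
})

# Integrate the two data streams on a common time axis.
merged = pd.merge(soil, weather, on="timestamp")

# A simple predetermined decision rule: irrigate only if the soil is dry
# and no significant rain is expected. This is the kind of agriculture 4.0
# decision support the article describes, not self-learning AI.
merged["irrigate"] = (merged["moisture_pct"] < 30.0) & (merged["rain_forecast_mm"] < 2.0)
print(merged)
```

The rule above is fixed in advance; the “true” AI the article anticipates would instead learn such rules from the data itself, which is precisely why the volume and fragmentation of today's farm data remain the main obstacles.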