
Machine learning should increase human possibilities

Source: socialeurope.eu

Butollo: Artificial intelligence is said to deliver answers to questions such as the right levels of taxation, reasonable urban planning, the management of companies and the assessment of job candidates. Is AI better at prediction and judgement than humans are? Does the availability of huge amounts of data make the world more predictable?

Esposito: Algorithms can process incomparably more data and perform certain tasks more accurately and reliably than human beings can. This is a great advantage, and we must keep it in mind even as we highlight their limits, which do exist and are fundamental. The most obvious is the tendency of algorithms, which learn from available data, to predict the future by projecting forward the structures of the present, including its biases and imbalances.

This also produces problems such as overfitting, which arises when a system adapts too closely to the examples from the past and loses the ability to capture the empirical variety of the world. For example, a system may learn so well to interact with the right-handed users it was trained on that it no longer recognises a left-handed person as a possible user.
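To make the point concrete, the following is a minimal, purely illustrative sketch (not from the interview) of what being 'overly adapted to the examples from the past' looks like in practice: an over-flexible model memorises a handful of noisy training points and then does worse than a simpler model on data it has never seen, much like the system that only recognises right-handed users.

```python
import numpy as np

# Illustrative sketch of overfitting (assumed example, not from the interview).
rng = np.random.default_rng(0)

# Training data: a noisy linear relationship observed at only six points.
x_train = np.linspace(0, 1, 6)
y_train = 2 * x_train + rng.normal(scale=0.1, size=x_train.size)

# A simple model (degree-1 polynomial) and an over-flexible one (degree 5)
# fitted to the same six points. The degree-5 fit passes almost exactly
# through the training examples, i.e. it is "overly adapted" to them.
simple = np.polyfit(x_train, y_train, deg=1)
flexible = np.polyfit(x_train, y_train, deg=5)

# New observations from the same underlying process at unseen locations.
x_new = np.linspace(0, 1, 50)
y_new = 2 * x_new

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on the data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print("training error, simple:  ", mse(simple, x_train, y_train))
print("training error, flexible:", mse(flexible, x_train, y_train))
print("new-data error, simple:  ", mse(simple, x_new, y_new))
print("new-data error, flexible:", mse(flexible, x_new, y_new))
# Typically the flexible model wins on the training points it memorised
# but loses on the new data: it has learned the noise, not the regularity.
```

The analogy to the left-handed user is direct: the flexible model encodes the idiosyncrasies of the data it happened to see rather than the underlying pattern, so anything slightly different falls outside what it can handle.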

Algorithms also suffer from a specific blindness, especially with regard to the circularity by which predictions affect the very future they aim to forecast. In many cases the future predicted by the models does not come about, not because they are wrong but precisely because they are right and are followed.

Think, for example, of summer traffic-flow forecasts for so-called smart departures: black, red and yellow days, and so on. The models predict that on July 31st at noon there will be traffic jams on the highways, while at 2 am one will travel better. If we all follow the forecasts, which are reliable and well made, we will all end up queuing on the highway at 2 am, contradicting the prediction.
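As a purely illustrative sketch (my own, not part of the interview), the self-defeating logic of such a forecast can be written out in a few lines: the prediction is accurate about the historical pattern, but once every driver acts on it, the quiet slot becomes the congested one.

```python
# Toy model of a self-falsifying forecast (assumed numbers, for illustration).

def congestion(departures: int, capacity: int = 1000) -> str:
    """Classify traffic at one departure time by how many cars choose it."""
    return "jam" if departures > capacity else "free-flowing"

drivers = 5000

# Forecast based on past behaviour: most people used to leave at noon.
forecast = {"noon": "jam", "2 am": "free-flowing"}

# If every driver trusts the forecast and picks the predicted quiet slot...
choices = {"noon": 0, "2 am": drivers}

actual = {slot: congestion(n) for slot, n in choices.items()}
print("forecast:", forecast)  # {'noon': 'jam', '2 am': 'free-flowing'}
print("actual:  ", actual)    # {'noon': 'free-flowing', '2 am': 'jam'}
# The model correctly described the old pattern, but acting on its advice
# reverses the outcome: following the forecast is what falsifies it.
```

The same feedback structure appears in any forecast that shapes the behaviour it describes, which is the circularity discussed just below.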

This circularity affects all forecasting models: if you follow the forecast you risk falsifying it. It is difficult to predict surprises and relying too much on algorithmic forms risks limiting the space of invention and the openness of the future.

Butollo: Do you see political dangers in relying too much on AI? Is the current hype around the subject a sign of the loss of our sovereignty as societies?

Esposito: The political dangers are there, but they are not determined directly by the technology. The possibilities offered by algorithms can lead to very different political outcomes and risks, from the hype about personalisation, which promises to expand the autonomy of individual users, to the Chinese ‘social credit’ system, which goes in the opposite direction.

Butollo: What are your recommendations for using AI in the right way? What should policy-makers consider when formulating ethical guidelines, norms and regulations with this in mind?

Esposito: Heinz von Foerster’s ethical imperative was ‘Act always so as to increase the number of possibilities’. Today more than ever this seems to me a fundamental principle. Especially when we are dealing with very complex conditions, I think it is better to keep learning continuously from current developments than to pretend to know in advance where you want to go.

And, incidentally, machine-learning algorithms also work in this way. With these advanced programming techniques, algorithms learn from experience and in a way programme themselves, going in directions that their designers often could not predict.

Butollo: What is a reasonable expectation of AI? What can we hope for, and how can we get there?

Esposito: What I expect with respect to AI is that the very idea of artificially reproducing human intelligence will be abandoned. The most recent algorithms that use machine learning and big data do not work at all like human intelligence and do not even try to emulate it; precisely for this reason they are able to perform, with great effectiveness, tasks that until now were reserved for human intelligence.

Through big data, algorithms ‘feed’ on the differences generated (consciously or unconsciously) by individuals and their behaviour to produce new, surprising and potentially instructive information. Algorithmic processes start from the intelligence of users to operate competently as communication partners, with no need to be intelligent themselves.
