Read the text carefully.
Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and
optimism. OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they
take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable
outputs – such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the
horizon of artificial general intelligence – that long-prophesied moment when mechanical minds surpass human brains not only
quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic
creativity besides every other distinctively human faculty. Whereas that day may come, one should allow that its dawn is not
yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The human mind
is a surprisingly efficient system that operates with small amounts of information. Of course, any human-style explanation is
not necessarily correct; we are fallible. Hence this is part of what it means to think: to be right, it must be possible to be wrong.
(Available at: The New York Times, March 8, 2023. Opinion, Guest Essay. Adapted.)
The highlighted linking words respectively introduce:
-
-
-
-