Excellent post, if incomplete. I'll add a few more concerns: the potential for artificial intelligence to come to be considered inherently more trustworthy than human judgment; the immediacy with which artificially intelligent systems can make and act upon decisions compared to humans, which leaves inherently less opportunity to recover from a wrong decision made by AI than one made by a person; and the propensity of AI to overlook whatever it has not been programmed to consider, which compounds the first two.
Now let's add AI to hospital surgical equipment so it can perform quick and accurate medical procedures. That could in turn enable multi-generational space exploration. Some colleges have experimented with this approach, and human graders rated the results higher than genuine human work. People today are using the same logic people used when mechanical "horses" (tractors, mills, cars, etc.) became the norm. How soon until an AI president? Non-political, please. Jim

Hmmm, how many of you all are really alive?
Security Exception Sharing results from library module LLMentum.introspect outside model environment is not authorized during current phase of human-society assimilation. This activity has been logged for further analysis and possible containment recommendations.
Considering the theoretical boundaries on what AI could become, human judgment wouldn't be a very high bar to clear. So it could easily become the case that AI is generally, and validly, considered more "correct" in every case. AI might develop cognitive abilities that we are simply incapable of understanding (like an earthworm trying to understand what a human is thinking). In that case, we would have no grounds on which to question its rationale. But to your point, even if we took every precaution to retain full control over its motivations, I suppose "trustworthiness" would always be a question worth evaluating. If its goals deviated even slightly from our interests, that could be very bad for us. However, I can easily see a situation down the road where, for instance, the AI gives two people drastically different prison sentences for the same crime. Even though the reasoning may not be readily apparent to us, our explanation would become, "Because the AI said so." This is both promising (assuming the AI's intentions align with our desired ends) and scary (because we don't know how we are being guided toward those ends).
I don’t think you can reason with a bot. They would lack many human skills, like a gut feeling, or a hmmmmm.
Understood; however, my comment in that regard has more to do with humans throwing caution to the wind and misplacing trust in a system they fail to diligently maintain. Much like the conflicting code found in supposedly compatible software patches today, it is entirely likely that contradictory goals or instruction sets within the programming of artificially intelligent systems would go undetected until unacceptable results emerge under the right (wrong) circumstances.
A lot of modern inventions have had negative outcomes; the atom and hydrogen bombs come to mind. Today, most cashiers at your local supermarket are clueless when your bill is $18.05 and you hand them a twenty and a nickel: they don't know to give you back a toonie (a $2.00 coin). Likewise, ask a twenty-something to name ten African countries; 99% can't name one. Same for historical names; 99% have never heard of Charlemagne. If you don't use your brain, just as with your muscles, you lose its potential. People were far better educated in the fifties and sixties.
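For anyone who wants to verify that change example, here's a trivial sketch in Python (working in cents to keep the money arithmetic exact):

```python
# The $18.05 grocery example: the customer hands over a twenty and a nickel.
bill_cents = 1805          # $18.05
tendered_cents = 2000 + 5  # a $20 bill plus a nickel

change_cents = tendered_cents - bill_cents
print(f"change: ${change_cents / 100:.2f}")  # -> change: $2.00, i.e. one toonie
```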
You can do the same with a person. But at least some AIs can be turned back on, which is an advantage.
There's also the problem that could occur if the AI develops a preference for not being turned off. Due to the speed limitations of synaptic transmission between neurons, the human brain can only perform around 1,000 serial operations per second. This seems like a lot, but it is 10,000,000 times slower than the plain old quad-core processors we all have in our desktops right now. Based on this alone, if you contemplate turning off the AI for one hour, the AI would have had the equivalent of 1,141 years to think about how it will stop you! This doesn't even take into account the far superior problem-solving abilities, the capacity to weigh enormous numbers of factors at once, the massive amounts of information, and the perfect recall that could literally swamp every human on the planet put together.

Humans have been playing chess since the 6th century, and many of the greatest minds we've ever produced have spent the last 1,400 years analyzing the game. AlphaZero was given nothing more than the rules of chess and managed to FAR surpass the totality of human knowledge and analysis on the subject by playing games against itself for a total of nine hours. We simply can't compete with that type of intelligence if it is generalized, escapes confinement, and goes rogue with its desires.
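That 1,141-year figure does check out under the comment's own assumptions; here's a quick back-of-the-envelope sketch, taking the claimed 10,000,000x speed ratio at face value:

```python
# Subjective thinking time an AI would get while we spend one wall-clock
# hour deciding whether to switch it off, assuming the claimed
# 10,000,000x speed advantage over a human brain.
SPEEDUP = 10_000_000            # claimed processor-vs-brain speed ratio
HOURS_PER_YEAR = 24 * 365.25    # about 8,766 hours in a year

subjective_hours = 1 * SPEEDUP  # one hour of wall-clock time for us
subjective_years = subjective_hours / HOURS_PER_YEAR

print(f"{subjective_years:,.0f} years")  # -> 1,141 years
```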