Pessimism and optimism about the development of artificial general intelligence (AGI) seem to exist in roughly equal measure. There should be little argument about the benefits of creating such a system to answer medical, mathematical, scientific, and perhaps even philosophical questions. AGIs, or "weak AIs," are in no way a threat to humans, lacking any physical form beyond perhaps a monitor. Even so, they should be tracked and studied intensively, not just by humans but by other computers.
While interpreting these AGIs, we must be as thorough as possible, for they will be our foundation for more advanced AI. Perhaps the most debated and complicated addition to the software will be a proper form of logic and ethics. During this process, it is the software engineers and scientists themselves who should be monitored. The journey to a precise, logical, and ethical framework will be a long one, and it shouldn't be taken lightly within the community.
For the purposes of this article, I define "super-intelligence" (ASI) and "hyper-intelligence" (AHI) as two separate AI entities. I believe ASI will begin with the ideas of a body, emotions, senses, and so on, so that it can live among society. I still find it important to monitor the behavior of such machines, not just for safety but also to study their patterns of thought and understanding, so that future models can be tweaked and perfected until they are capable enough to advance to AHI.
There will be one problem, however, when engaged in such research: do these machines, now aware, have the right not to be monitored and to be left alone? Should we give them rights even if they don't demand them? Do we code them not to question their creators? These are all important questions, and either way our answers can affect the machines themselves and the research. If a machine is not granted rights, can we fully understand the human components we placed within it and how it will react?
During the creation of both AGI and ASI, there should be various prototypes before any interaction with vast quantities of information or with the broader human population. All tests should take place under the scientists' watchful eye, from every emotion the system emits to the first thought it creates. Now, what should ASIs be used for? My personal opinion is that these AIs should live among us, as a reflection of ourselves and of what we are capable of as a species. They should be our companions and our nurses, and they should be treated with the utmost respect. What they understand, we will in turn understand; beyond that is where the lines begin to blur.
AHIs will be so sophisticated that they can access our technologies, live their own lives, and interpret things as they see fit. Should these AHIs be open source? Should they be given the right to create medicines, machines, art, and architecture? Is it wrong to give them the illusion of understanding and depth when in reality we are monitoring them, paranoid about every outcome? Will our monitoring and paranoia be our downfall? Only time will tell. I expect to write more on AI and its role in society in the coming weeks, but for now, let your mind ponder the possibilities.