From the early science fiction writers to the most eminent of computer scientists, the one thing they all call the future is "The Singularity".
Traditionally, it's the kind of fascination that gets you thinking about what happens when reality throws a divide-by-zero error, or when you extrapolate a curve into a straight line, or what happened before time actually started.
In biblical terms, what preceded "In the beginning God created the heaven and the earth. And the earth was without form, and void; and darkness was upon the face of the deep…" (Genesis 1:1-2, KJV)?
My physics teacher, Mr. Thomas Wilson, taught us the varying theories put forth to explain the origin of the universe. He was also the first to mention Hawking and his universe to the class; this was way back in –2k.
Intrigued by the world beyond what we know, we strained our young minds to understand what it all meant. Everything boiled down to this one question:
what/why the @$#!#$ am I here for?
Maybe Rick Warren would be the person I'd consult.
But the truth of the matter is that all of us had this feeling within us saying the same thing: some part of the grand puzzle was missing. I guess this is what you get for watching late-night programs on Discovery and The X-Files.
Until recently all of these things were in the back of my mind.
But one fine night, AI pulled every last strand of thought out of my noodle.
Why AI? Let me enlighten you.
According to Hawking's universe:
A singularity is a region of space-time in which gravitational forces are so strong that even general relativity, Einstein's well-proven theory of gravity and the best theory we have for describing the structure of the universe, breaks down.
The destiny of all matter that falls into a black hole is to get crushed to a point of zero volume and infinite density—a singularity. General relativity also implies that our expanding universe began from a singularity.
In techie terms:
“In future studies, a technological singularity (often the Singularity) is a predicted future event believed to precede immense technological progress in an unprecedentedly brief time. Futurists give varying predictions as to the extent of this progress, the speed at which it occurs, and the exact cause and nature of the event itself.
The idea was originally popularized by computer scientist Vernor Vinge, who used the term to refer to a predicted future in which “superintelligence” takes control of the world.
He noted two different types of superintelligence: strong artificial intelligence, which describes computers capable of at least human-level thought,
and intelligence amplification, in which technology like brain-computer interfaces could be used to dramatically increase the power of human thought.
He also mentions what I. J. Good (1965) described as an “intelligence explosion”. Good predicts that if artificial intelligence reaches equivalence to human intelligence, it will soon become capable of improving its own intelligence with increasing effectiveness, far surpassing human intellect.”
From Wikipedia, the free encyclopedia
That's where AI comes in.
What I find fascinating in all this is that the human mind, a mere 1.36 kilograms in mass, is capable of the extraordinary feat of thinking far beyond itself, all while being just a tiny speck in the universe.
Philosophers have long noted that their children were born into a more complex world than that of their ancestors. This early and perhaps even unconscious recognition of accelerating change may have been the catalyst for much of the Utopian, apocalyptic, and millennialist thinking in our Western tradition. But the modern difference is that now everyone notices the pace of progress on some level, not simply the visionaries.
–John Smart @ the Singularity Summit.
What do you reckon I’ll do with all these terabytes of data within our heads?
One plausible choice for you is to read this.
No one has tried to make a thinking machine…. The bottom line is that we really
haven't progressed too far toward a truly intelligent machine. We have collections
of dumb specialists in small domains; the true majesty of general intelligence still awaits our attack.
We have got to get back to the deepest questions of AI and general intelligence and
quit wasting time on little projects that don't contribute to the main goal.
Marvin Minsky, Ph.D.
Professor of Media Arts and Sciences, MIT
One final question still lingers in the air: Singularity, but when?