@Stafford Just some random musings after watching an astronomy doc on YouTube.
The speed of light is about 670 million miles per hour. The fastest man-made object is the Parker Solar Probe, which is doing 330,000 miles per hour, or roughly 0.05% of the speed of light.
Put another way, that's 1/2000th the speed of light.
So, if you want to know how long it would take us to reach Proxima Centauri, our nearest neighboring star at 4.2 light-years' distance, at that speed: take the distance in light-years, multiply it by 2,000, and you get a paltry 8,400 years.
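That arithmetic can be sketched in a few lines of Python. Note that using a more precise value for the speed of light (the 1/2000 figure is a rounded shorthand) nudges the answer a bit above 8,400 years:

```python
# Back-of-the-envelope travel-time estimate, using the figures from the post.
SPEED_OF_LIGHT_MPH = 670_616_629   # ~670 million mph
PROBE_SPEED_MPH = 330_000          # Parker Solar Probe

# Fraction of light speed: ~0.00049, i.e. roughly 1/2000
fraction_of_c = PROBE_SPEED_MPH / SPEED_OF_LIGHT_MPH

def travel_time_years(distance_ly: float) -> float:
    """Years to cover `distance_ly` light-years at the probe's speed."""
    return distance_ly / fraction_of_c

# Proxima Centauri: ~8,535 years with the precise c
# (the 1/2000 shorthand gives the rounder 8,400)
print(round(travel_time_years(4.2)))
```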
To quote The Expanse, 'Space is too damn big!'
@Concord Strange time to be alive, eh?
Have you heard of the fracas surrounding Google's A.I. possibly being sentient?
It raised all kinds of philosophical questions about what consciousness actually is.
Are we just meat computers, or is there a soul in there as well?
Is the difference between humans and animals simply a capacity for self awareness?
Listening to the segment below, I hear that the AI in question was created using complex algorithms that use ALL the text on the internet to learn.
So my initial conclusion is that it is simply complex math that emulates human thinking.
But this raises further ethical questions.
With the development of bots that can trick people into thinking they are human and 'deepfakes', it seems there is a huge potential for manipulation.
Mainstream media is largely distrusted because of perceived bias by governments and corporations.
How long before convincing deepfakes that can interact and respond to people make an appearance?
This technology is breaking new ground that requires some scrutiny and debate. But how do you even regulate something like this?
So is this how it begins...?
@Concord
One can only hope that these complex algorithms that use ALL the text on the internet to teach the robots have, if nothing else, reinforced Isaac Asimov's Three Laws of Robotics as a central part of that process. Although, as the article below illustrates, there are some who do not believe that will be enough.
Either way it's a brave new world we appear to be entering ...
THE THREE LAWS
Asimov’s Three Laws are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov's Laws Won't Stop Robots from Harming Humans, So We've Developed a Better Solution
Instead of laws to restrict robot behavior, robots should be empowered to pick the best solution for any given scenario
www.scientificamerican.com
Cheers!