Wikipedia:Reference desk/Archives/Science/2015 June 28

Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 28

Rhizophoraceae, could it be narrowed down taxonomically?

Hi Ref Desk,

Which kind of Rhizophoraceae?

I just found some photos I could stitch into a panorama of mangroves I took 5 years ago at Anse du Canal near Petit-Canal on Grande-Terre, Guadeloupe, Lesser Antilles. The family of mangrove plants is the Rhizophoraceae, but I was wondering if the identification could be further narrowed down to the genus level, or perhaps even the species level, based on the photo alone and the location? -- Slaunger (talk) 07:12, 28 June 2015 (UTC)[reply]

Great photo! Using the photo for species-level ID would require an expertise I don't think we have here. Fortunately, the specific location will help you rule out several genera. Just a note: Mangrove#Taxonomy_and_evolution lists several other families in addition to Rhizophoraceae that also have species known as mangroves in them; you might want to browse through those also. SemanticMantis (talk) 14:35, 29 June 2015 (UTC)[reply]
Resolved Thanks, SemanticMantis. I decided to search for scientific papers dealing with mangroves in Guadeloupe and found a researcher (Daniel Imbert, Université des Antilles et de Guyane), whom I asked by email for identification. He had no doubt it was red mangrove (Rhizophora mangle). -- Slaunger (talk) 18:41, 30 June 2015 (UTC)[reply]

How can one induce sleep paralysis?

I read the article on sleep paralysis but it doesn't say how to induce it. I'd really like to experience it for myself. Thanks — Preceding unsigned comment added by 117.169.218.7 (talk) 11:59, 28 June 2015 (UTC)[reply]

You can't induce it; there are just certain things that make it more likely [1]. Mikenorton (talk) 12:22, 28 June 2015 (UTC)[reply]
Have your doctor strap you into your bed. That should do the trick. ←Baseball Bugs What's up, Doc? carrots→ 15:32, 28 June 2015 (UTC)[reply]

Why did birds survive the extinction of other dinosaurs?

It seems to be generally accepted that modern birds are descended from the dinosaurs, but I'm left wondering why specifically the birds survived the dinosaur extinction event. It seems very surprising that no other kinds of small dinosaur made it through the mass extinction alongside the birds, reptiles and mammals that seemingly had similar lifestyles, body sizes and diets to small non-avian dinosaurs.

What feature of birds enabled them to get through the catastrophe?

SteveBaker (talk) 14:50, 28 June 2015 (UTC)[reply]

I asked a similar question here, about how birds were able to survive at all:

"My answer to a related question about aquatic dinosaurs may help, I at least point to a Radio Lab episode where they discuss the KT impact with geologists. Basically, anything that was on the surface of the Earth got cooked, but anything under a few inches of soil would be ok. This favored small mammals, birds, and reptiles that nested underground. While modern birds don't typically bury eggs, ancient birds may have, to hide them from predators. – user137 Nov 18 '14 at 20:52" Count Iblis (talk) 15:53, 28 June 2015 (UTC)[reply]

Sure, but surely there were small dinosaurs (comparable in size and lifestyle to mammals and reptiles) that weren't adapted to flight at the time? Seems unlikely that ALL small dinosaurs were birds...and clearly you didn't need any of the special bird adaptations such as flight or egg-laying in order to survive. So if birds could survive - then why no tiny dinosaurs of other kinds? SteveBaker (talk) 17:16, 28 June 2015 (UTC)[reply]
This is probably related to birds' capability of flight. Ruslik_Zero 17:35, 28 June 2015 (UTC)[reply]
But the mammals and reptiles that survived couldn't fly. Why did the dinosaurs need to be able to fly in order to survive? SteveBaker (talk) 19:15, 28 June 2015 (UTC)[reply]

Wasn't it less the case that birds survived as such, and more that birds later evolved from the small number of dinosaur species (or even one species?) that managed to survive the extinction? --87.112.205.195 (talk) 17:38, 28 June 2015 (UTC)[reply]

No, that was my first thought too - but when I was trying to figure this out for myself, I discovered that the evolution of birds happened long before the extinction event. Microraptor was around 120 to 125 million years ago and was capable of powered flight and looked a lot like a modern bird (except for the small matter of having four wings!). But the rest of the dinosaurs didn't go extinct until 66 million years ago - so birds had been evolving for 55 to 60 million years before the rest of the dinosaurs vanished. SteveBaker (talk) 19:14, 28 June 2015 (UTC)[reply]

It might be that a few kinds could fly higher than other dinosaurs, so they survived.

We don't know. We lack a complete understanding of the events of the impact, or of the ecology that existed before and after the event. We do know that at least some mammals and birds survived, though far from all of either. We also know that non-avian dinosaurs went extinct, but a variety of other reptiles (e.g. alligators, turtles) survived. Some of the difference is undoubtedly due to adaptation, lifestyle, and flexibility, but some of the difference between who died and who survived is also probably down to dumb luck. Imagine killing 99% of all individual animals alive and then seeing which survivors manage to form successful breeding populations; not an easy thing if your species is large and sparsely populated. It also isn't very clear how much differentiation already existed prior to the event. For example, if small burrowing mammals were already more successful than small dinosaurs in occupying the burrowing niches, there may have been few or no dinosaurs suited to hiding underground during the event. Cretaceous–Paleogene extinction event provides some detail on the speculations about the role that traits like small size, burrowing, swimming, ectothermy v. endothermy, etc. may have played in favoring some groups over others. Dragons flight (talk) 22:38, 28 June 2015 (UTC)[reply]

Agreed, we don't (and likely can't) know for sure. Paleoecology is hard stuff. It's probably not even deterministic - when you get to low population levels after Disturbance_(ecology), things like Allee effects and founder effects pose a serious challenge to any kind of long-term, deterministic prediction of how competition will sort out the community ecology of a certain clade. It's not just about who survived the first few years after impact, but also about who was able to have positive growth rates in the new world - it's not just existence, but species coexistence in the community context that matters. There's also the extreme likelihood of multiple basins of attraction in the population processes, known as Alternative stable states in ecology (a toy numerical sketch of this appears below). Even in homogeneous worlds, we can see evidence of emergent endogenous heterogeneity and non-deterministic competitive outcomes over time. And of course the real world has lots of heterogeneity that further complicates deterministic prediction.
That being said, there is indeed speculation on why things worked out the way they did, though it doesn't always get published in reputable journals. This [2] is a nice write up of the specifics of kill mechanisms. This [3] is a nice general perspective piece on extinction vulnerability, with some discussion of K/T. This one [4] might be your best bet at getting a view of how morphology is thought to play into the sauropod/avian evolution around the K/T. Incidentally, there is still at least some debate over when the avian radiation occurred, and how many birds or birdlike things survived. These two papers seem to be part of the emerging consensus that avian radiation was well underway before the K/T extinction [5] [6]. If you believe that there were many birds around before the asteroid hit, then they would have already gone through several iterations of Niche_differentiation, and that by itself is enough to suggest that most extinction mechanisms would have a selective effect, even though it doesn't tell us exactly why any one clade was selected. SemanticMantis (talk) 15:04, 29 June 2015 (UTC)[reply]
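To make the Allee effect / alternative stable states point above concrete, here is a minimal numerical sketch (plain Python; the growth rate, Allee threshold and carrying capacity are invented for illustration, not estimates for any real post-impact population). Populations that start below the threshold slide to extinction, while those that start above it recover toward carrying capacity - two basins of attraction from the same dynamics.

    # Toy model with a strong Allee effect: dN/dt = r*N*(N/A - 1)*(1 - N/K)
    # r = growth rate, A = Allee threshold, K = carrying capacity (all made up).
    def simulate(n0, r=0.1, A=50.0, K=1000.0, dt=0.1, steps=2000):
        """Euler-integrate the toy Allee model from initial population n0."""
        n = n0
        for _ in range(steps):
            n += dt * r * n * (n / A - 1.0) * (1.0 - n / K)
            n = max(n, 0.0)
        return n

    for n0 in (30, 40, 60, 200):
        print(f"start at {n0:3d} -> ends near {simulate(n0):6.1f}")
    # Starts below A (= 50) collapse toward 0; starts above A approach K (= 1000):
    # two alternative stable states, so small differences in post-impact numbers
    # can decide whether a breeding population persists at all.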
Some good stuff above; I would only caution/reaffirm that a) it's honestly unknown at this point, b) palaeontology is undergoing huge shifts in understanding and has been for the last few decades, and c) the dividing lines between birds and dinos, such as they exist right now, are being routinely re-written. Experienced dino guys, like Robert Bakker, have asked similar questions about other groups of animals: what catastrophe is big enough to kill off all the dinosaurs but too small to kill off the frogs? We know using present-day data that frogs and other amphibians are great indicators of the health of an ecosystem - they don't tolerate much change - yet they apparently had no problem with the global catastrophes we associate with the KT extinction. Matt Deres (talk) 19:22, 29 June 2015 (UTC)[reply]

Memory question

During the day when you do something you typically remember doing it. Can you tell me what chemical is released into the brain at night that prevents you from remembering your dreams? Also, how is it so fast acting that one can wake for a few moments in the middle of the night and remember doing so, but not remember the dreams either side of the awakening? — Preceding unsigned comment added by 183.217.239.165 (talk) 17:20, 28 June 2015 (UTC)[reply]

I've read somewhere that this has to do with the way memories are stored in the brain. When you dream, your memory works differently, similar to how it works in animals and small children. At the age of about 5 the memory starts to work differently due to the development of language. When the vocabulary of the child becomes rich enough, memories are increasingly filed in the brain using abstract language rather than the primary experience. It has been found that events in early childhood could be clearly remembered by children until the age of about 5, and then all of a sudden the child will fail to be able to recollect the event. This is then purely a matter of the brain starting to use a different filing system to store and recollect memories; old memories filed according to the old system then become untraceable.
Now, when we dream, we revert back to using the old system for memory again, because in dreams we typically don't use language a lot. Then, when you wake, your brain will use the usual language-based system for memory recollection, and then you'll have difficulties remembering the dream. You can also notice this effect when you remember another dream inside a dream, or memories from early childhood inside a dream. Count Iblis (talk) 17:53, 28 June 2015 (UTC)[reply]
Another point of view (because this is an area of active research): dreams are not transferred from short-term to long-term memory. Short-term memory lasts a few minutes. If you talk about your dream as soon as you awake, you can remember what you said, but you will forget the dream. 75.139.70.50 (talk) 19:34, 28 June 2015 (UTC)[reply]
Who says so? ←Baseball Bugs What's up, Doc? carrots→ 19:55, 28 June 2015 (UTC)[reply]
First, to summarize... Norepinephrine appears to be required to move memories from short-term to long-term memory. I have not seen any studies that satisfactorily explain the full mechanism. It appears to be more complicated than just triggering the hippocampus to move the memories - though that is certainly involved, because people with cancer or damage to the hippocampus fail to move memories into long-term memory. It appears, across many VERY easy to find studies in psychiatric journals, that the prefrontal cortex is also used to decide which memories the hippocampus should move, and norepinephrine is used in the process. While sleeping, levels of norepinephrine drop. Without norepinephrine, memories are not moved from short-term memory to long-term memory. When you wake up, norepinephrine levels rise and your ability to remember things returns. So, WHO says so: Try Fricke and Mitchison from Berkeley, Crick and Koch from Caltech, Hartmann from Tufts, and pretty much anything in the Journal of Sleep Research.
As for your comment below that you can "remember" a dream by concentrating on it - that falls into the territory of false memories. There are countless experiments that expose how false memories work. I recently saw a television program in which a "crime" took place and witnesses were questioned. They were asked to concentrate and remember the event. They convinced themselves that they were remembering the criminal's jacket, hat, and shoes (he wasn't wearing a hat). They remembered the color of the victim's dress (she had on a shirt and slacks). As I mentioned above, if you think about a dream as soon as you wake, you can remember thinking about the dream and move those thoughts to long-term memory. However, you will forget the actual dream. With false memories, you can fill in all the gaps in your memory and convince yourself that you have remembered your dream, but you are actually just remembering thinking about the dream - not the dream itself. 199.15.144.250 (talk) 11:55, 29 June 2015 (UTC)[reply]
The TV series "Brain Games" often has segments showing how unreliable and incomplete witness memories can be. And I should clarify that I usually don't remember an entire dream, just the parts that made a strong impression; at the same time realizing there was more to it, but it's gone. ←Baseball Bugs What's up, Doc? carrots→ 14:59, 29 June 2015 (UTC)[reply]
Sleep study experts would likely tell you that you are remembering the impression that the dream had on you and not the dream itself. Another way I've seen it described is that you are vividly remembering your reaction to the dream that you had once you woke, but you are not remembering the dream itself. Your reaction likely contains short-term memories of the dream, which then are transferred to long-term memory because they are part of the reaction. In the end, there is a muddy definition of "remembering the dream." Is remembering a memory of the dream the same as remembering the dream? For the scientific studies done at the sleep lab in our hospital, memories of memories don't count as remembering the dream. Side note: study of dream recall is getting more funding now that it is moving into general studies of memory storage and recall. If you are really interested in this area and have a related degree, check with your local sleep lab. There will likely be opportunities for research. 209.149.113.185 (talk) 16:01, 29 June 2015 (UTC) — Preceding unsigned comment added by Baseball Bugs (talkcontribs) [reply]
I'd like to see a citation for the claim that "...you typically remember doing it." Unless you're like Carrie Wells, your routine activities will fade from memory. For example, driving to and from work every day for thousands of work days, you'll remember the route, but you're not likely to remember any particular drive to or from, unless something unusual happened and made an impression. ←Baseball Bugs What's up, Doc? carrots→ 20:01, 28 June 2015 (UTC)[reply]
I wrote a blog post about this a few months ago, The frustrating puzzle of dream amnesia. The basic message is that we rather surprisingly don't know the answer. Looie496 (talk) 03:00, 29 June 2015 (UTC)[reply]
In contrast to the OP's claim, it is possible to remember dreams, at least in part, by concentrating on them. ←Baseball Bugs What's up, Doc? carrots→ 03:41, 29 June 2015 (UTC)[reply]
Yes, though it's impossible to say how much is legitimately remembered and how much is just false memory (how would you even know?). I've read and can back up based on experience that the ability to recall is severely hampered by movement. Lying perfectly still and concentrating on the dream may give you some impressions, but it's usually a lost cause once you stand up. Matt Deres (talk) 19:36, 29 June 2015 (UTC)[reply]

Asteroids

If some asteroids collided with one of the solar system gas giants like Uranus, would there be an outflow of matter or would it just be bottled up and increase the pressure? --109.149.199.246 (talk) 18:24, 28 June 2015 (UTC)[reply]

Something similar happened with Comet Shoemaker-Levy 9 and Jupiter. --TammyMoet (talk) 18:56, 28 June 2015 (UTC)[reply]
Actually more than one time: Category:Jupiter_impact_events. Ruslik_Zero 20:01, 28 June 2015 (UTC)[reply]
We know that asteroid impacts with Mars were enough to cause chunks of rock to be thrown all the way to impact on Earth (See: Martian meteorite). The fact that we've found 132 of them so far suggests that this is not an uncommon thing. Less remarkably, we've also seen chunks of moon rock arrive here on Earth via the same mechanism. That being the case, it wouldn't surprise me if gases from Uranus also reached the escape velocity of that planet, never to return. Uranus' escape velocity is about four times greater than that of Mars - but that would also result in impacting asteroids being accelerated to higher speeds as they head inwards to the surface.
Uranus has 27 moons and a bunch of rings that might also capture ejected material and prevent it from returning to the planet itself without it having to reach escape velocity.
It doesn't seem likely that the total mass lost in this manner would exceed the mass of the incoming impactor...but I think that would be hard to prove conclusively. SteveBaker (talk) 21:14, 28 June 2015 (UTC)[reply]
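As a rough sanity check on the "about four times" figure, here is a minimal sketch (plain Python, using standard textbook values for mass and mean radius) that computes the surface escape velocity v = sqrt(2GM/R) for both planets:

    import math

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    # Approximate mass (kg) and mean radius (m) from standard references.
    bodies = {
        "Mars":   (6.42e23, 3.39e6),
        "Uranus": (8.68e25, 2.54e7),
    }

    for name, (mass, radius) in bodies.items():
        v_esc = math.sqrt(2 * G * mass / radius) / 1000.0  # convert m/s to km/s
        print(f"{name}: escape velocity ~ {v_esc:.1f} km/s")

    # Mars comes out near 5 km/s and Uranus near 21 km/s - a ratio of roughly four,
    # consistent with the figure quoted above (measured at the cloud tops for Uranus,
    # since it has no solid surface).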
Mars and the Moon are rocky bodies, and rock shatters on impact. Jupiter is a gas giant, and Uranus is an ice giant. If matter were ejected due to impact, it wouldn't be in a form capable of impacting another planet such as Earth. I haven't researched it, but I would think that the gassy or icy nature of a giant planet would minimize the amount of ejected matter as opposed to a collision with a rocky planet. Robert McClenon (talk) 00:52, 29 June 2015 (UTC)[reply]
I wasn't suggesting that the matter from Jupiter or Uranus would be capable of reaching Earth - I was merely pointing out that there is clearly enough energy present in a falling asteroid to propel rocks outwards at speeds beyond escape velocity. Therefore, an impact with a gas giant could easily propel gases outwards at similar speeds and result in a total loss of that material.
If you need a mental image of this - consider that if you toss a brick into a swimming pool, the splash goes way higher than if you drop it into a sand box from the same height.
Furthermore, you don't need to push the material outwards as fast as escape velocity if it can be pulled towards another body before it falls back...and Uranus has a rich set of moons that could capture sub-escape-velocity material.
There is no doubt that material will be lost in this way...the question is whether the amount of material that's lost exceeds the mass of the asteroid itself...which is a tougher thing to estimate.
SteveBaker (talk) 04:39, 29 June 2015 (UTC)[reply]
I seem to recall that the recent impacts on Jupiter shot fireballs back out. I believe the mechanism was that the impactor displaced the atmosphere, leaving a vacuum in its wake, which then collapsed as superheated gases from the sides rushed in, creating a higher pressure, resulting in the fireball. How far out those gases were blown, I do not know. StuRat (talk) 04:34, 30 June 2015 (UTC)[reply]

Commercial space flight mishaps

Today's launch of a SpaceX Falcon 9 suffered a catastrophic explosion that destroyed the launch vehicle and its payload. This event was originally described as an "anomaly" by NASA. Later, at the first press conference, an FAA spokesperson described the occurrence as a "mishap." This terminology has specific meaning in the context of commercial space flight (it is defined by 14 CFR §437.75).
How were previous SpaceX failures categorized? Do we yet have a Wikipedia article listing anomalies and mishaps during commercial space operations?
Nimur (talk) 20:58, 28 June 2015 (UTC)[reply]
Have you seen this article[1]? The 'See also' and 'External links' sections might help you...I haven't checked... -- Space Ghost (talk) 18:55, 29 June 2015 (UTC)[reply]

References

Do these new driverless cars comply with the three laws of robotics?

(see Three Laws of Robotics)

Q as topic. It seems that we're coming to a point where this is becoming an important issue. With something like a Roomba, it doesn't matter so much, but robots/AI/whatever being in charge of road vehicles with the potential to kill humans seems different.--87.112.205.195 (talk) 21:04, 28 June 2015 (UTC)[reply]

No. They almost certainly lack the intelligence to reliably identify human beings, and have no abilities to 'obey orders' beyond their designed purpose - driving on roads, in traffic. They aren't 'robots' in the general sense that Asimov describes, and don't need to be to do the job required of them. AndyTheGrump (talk) 21:11, 28 June 2015 (UTC)[reply]
Asimov's laws are exceedingly subtle in their wording and implications. For example, the first law says that a robot may not allow a human to come to harm because of its own inaction. Arguably then, it should drive itself off and help the underprivileged rather than drive you to work this morning! On the other hand, because even with perfect software there is a non-zero chance of you dying in a car wreck when you are driven by it, it might simply refuse to drive anyone anywhere.
The second rule says that a robot must obey the orders of any human, providing that doesn't result in it infringing the first rule...so a car thief can order it to drive away and be dismantled for parts...maybe...depending on the fuzzy definition of "come to harm". The third rule requires the car to protect itself...but not if that would infringe the first or second rules...so that rule doesn't help here.
But it's very hard to understand these rules - what does "come to harm" mean for a human? Is loss of money "harm"? If not, then the robot will drain your bank balance in order to obey the command of a random vandal who thinks it's fun to tell cars to crash themselves into the nearest brick wall. If loss of money does constitute "harm" then the car might feel that it's better if you take a taxi today in order that it can protect its own existence by not wearing out its parts by driving you to work.
So no, for sure not. I don't think any robot of any kind will ever truly be able to follow these rules...and if it could, I think it would be an entirely useless device - so I doubt anyone will even try to implement them.
If you read the Isaac Asimov books that explore the three laws that he proposed, virtually every story is an example of a robot that does something exceedingly bad because it's following the three laws. In some cases, he claims that the robots can balance relative strengths of the three laws - but that seems to imply a complicated equation between "harm" as in loss of money, "harm" as in minor scratches and scrapes or "harm" as in death, balanced against verbal orders that range from outright demands to mere suggestions from humans.
The laws aren't even all that useful. Should a car save your life by driving into a crowd of 100 small children - doing so with enough care to ensure that they all live, despite losing limbs and spending the rest of their days in wheelchairs? That's a difficult ethical decision - and the laws don't help the robot to decide in the slightest.
SteveBaker (talk) 21:38, 28 June 2015 (UTC)[reply]
Isaac Asimov wrote some great stories, but they were science fiction. More specifically, his robot series were very old science fiction. How old? So old that when Asimov conceived of his Three Laws of Robotics, he never considered that robots would have computers in them! Asimov wrote some very excellent stories about robots, and some very excellent stories about computers, but I am not aware of any story where he connected the facts together: his robots are not programmed using computer software! (If anybody can cite any Asimov story which even slightly contradicts this, I will happily rescind my assertion!) Instead, Asimov posited a special machine, a positronic brain, which could not be reprogrammed: rules about decision-making were built into this machine. This is the opposite of how computers work! General purpose computing machines can be reprogrammed to follow any algorithm that we can describe as logical sequences of steps represented in a formal machine language!
If you would like to read a very interesting and much more modern robot science fiction story, The Robot and the Baby by John McCarthy explores software programming rules that govern the ethical behavior between robots and humans. The story was written only a few years ago, and its author was a foundational contributor to the field of artificial intelligence programming. Unlike Asimov's robots, this robot actually must follow a computer program to calculate how it interacts with a human when it is faced with an ethical conundrum. McCarthy's story provides a more accurate representation of the way a computer-software-controlled robot would behave, even down to the fact that every robot who runs the same software is expected to behave identically. Asimov's fictional stories about his U.S. Robots and Mechanical Men robots don't seem to follow this logic! If you want to know how Asimov envisioned robotic cars, and how his Three Laws would apply, we don't have to speculate: we can read Sally (1953)! Those cars are a lot more emotional than modern robotic cars (like Stanley), and their decisions never are described to be the results of calculation.
Nimur (talk) 21:53, 28 June 2015 (UTC)[reply]
I agree that Asimov didn't have software in his robots - but he very much did expect them to behave identically - in Little Lost Robot, Dr Susan Calvin tries to find one robot with a slightly modified design amongst 62 identical robots...and the job is tough because the odd one out has been ordered to "get lost" - which it does by impersonating the others. All 63 robots behave identically in almost every test.
His robots behave very much as if they were running software.
But, "...the fact that every robot who runs the same software is expected to behave identically" - only true in theory. In practice, robots only behave identically in identical circumstances. My robotic vacuum cleaner starts out from it's charging socket every day, and with the precise same software, trundles around vacuuming the house. But it doesn't take the exact same path every day. Microscopic variations in the floor make it diverge a little to the left and right as it drives along, and when it comes to the point where it has to turn left or right in order to avoid the opposite wall, it chooses (I think) based on how much it has to turn in either direction - sometimes the wall is at an angle of 89 degrees, other times it's at 91 degrees - so it's essentially random. Two AI robots will similarly have slightly different sensory inputs, slightly different initial circumstances. It's like identical twin humans. Sure, they start off with identical DNA - but before very long, they are living quite different lives.
The world is chaotic (in the mathematical sense of extreme sensitivity to initial conditions) - and robots are not immune to that.
But it is unfortunate that people who read Asimov's robot stories hold up these fictional rules as being a great starting point for real robots. It's certainly not true - and there is no evidence that Asimov ever believed that.
SteveBaker (talk) 23:52, 28 June 2015 (UTC)[reply]
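For what it's worth, the "extreme sensitivity to initial conditions" mentioned above is easy to demonstrate numerically. Here is a minimal sketch (plain Python, using the standard logistic map in its chaotic regime as a stand-in for any chaotic process; it is not a model of any actual robot): two trajectories started a hair apart become completely uncorrelated within a few dozen steps.

    # Logistic map x -> r*x*(1-x) with r = 4, a textbook example of chaos.
    def trajectory(x0, r=4.0, steps=40):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = trajectory(0.400000000)
    b = trajectory(0.400000001)  # differs only in the 9th decimal place

    for step in (0, 10, 20, 30, 40):
        print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f} "
              f"(difference {abs(a[step] - b[step]):.6f})")
    # The tiny initial difference roughly doubles each step, so by step 30-40
    # the two runs bear no resemblance to each other.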
What would be some good rules for real robots then, do you think? I know that people have suggested that all robots/AIs be taught the Ten Commandments. Which could either go very well, or very, very, very badly indeed. Another one (and probably the most important) should probably be NO SELF-REPLICATING MACHINES UNDER ANY CIRCUMSTANCES. --87.112.205.195 (talk) 00:20, 29 June 2015 (UTC)[reply]
Remember that Asimov started writing his robot stories in 1939 and formalized the Three Laws of Robotics in 1942. Computers in any sense that we know them had not been invented. The closest that there was to what we now call software was the early work of Alan Turing. In the 1950's, when Asimov was still writing robot stories, computers existed, but were so large and energy-intensive that they were not seen as an alternative to the hypothetical positronic brain. Asimov, like other science fiction writers of the Golden Age, was sometimes too optimistic and sometimes not optimistic enough in anticipating technology. Robert McClenon (talk) 00:40, 29 June 2015 (UTC)[reply]
The ten commandments are woefully inadequate - let's have a quick run-through:
  1. You shall have no other gods before Me. -- Hmmm - probably not an issue.
  2. You shall not make idols. -- Shame, with 3D printing, robots are pretty good at that.
  3. You shall not take the name of the LORD your God in vain. -- OK - make a note of that in the vocabulary database.
  4. Remember the Sabbath day, to keep it holy. -- Robots are good at remembering stuff: const char* sabbath = "Sunday"; ...but "keeping it holy" evidently pulls in a whole raft of other rules about what exactly that means. I don't know that we want our robots to basically shut down and praise god on one day of the week.
  5. Honor your father and your mother. -- Don't have those, so no problemo.
  6. You shall not murder. -- Aha! A useful law finally! Not quite as good as Asimov's 1st law, but definitely A Good Thing. God evidently doesn't care if you allow someone to die through your inaction, or if you cause them non-fatal injuries during extreme torture sessions.
  7. You shall not commit adultery. -- Um...yeah, that's probably a good one to toss in there, but until robots get considerably more anthropomorphic than they currently are, it's probably not an issue for anything fancier than an artificially intelligent dildo.
  8. You shall not steal. -- Ooooh! Good rule! Asimov's robots are perfectly OK with stealing (depending on how you define "harm to humans").
  9. You shall not bear false witness against your neighbor. -- OK, no problem there.
  10. You shall not covet stuff. -- Not hard for an arguably emotionless device.
So it seems likely that a very few lines of code would take care of most of those things (a toy sketch of such checks follows below) - but in truth the only useful protections these offer are that the robot won't murder or steal. But sadly, there is no prevention for maiming small children or deliberately driving into other cars on the freeway just to see how much damage they'll take. Since the robot doesn't have to follow anyone's instructions, you could get into your car at the end of a long day at work and discover that it just doesn't feel like driving home today.
The ten commandments are totally useless here. Asimov's rules work better - but they're still nowhere near adequate.
We'll need *SO* many rules. First of all, the robot must ingest the entire corpus of the constitution and laws of the places it's going to visit - including case law and 'common law'. It'll have to know that driving at the speed limit on a freeway is going to upset all the humans behind it - and breaking the law by driving 10% faster is considered acceptable. It'll have to follow customary rules of politeness..."Thou shalt not yell 'FUCK YOU' at maximum audio volume and/or raise robotic middle finger at persons who cut you off in traffic - even though your owner does exactly that when in 'MANUAL DRIVE' mode."...for example.
The rules and customs of life are vastly more complex than can possibly be encapsulated in a small, convenient number of rules. Robots will either need hundreds of millions of lines of code to cover all of the eventualities (which, I guarantee will be bug-ridden as all hell) - or they'll have to learn, making many horrible mistakes along the way...just like people.
SteveBaker (talk) 03:10, 29 June 2015 (UTC)[reply]
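Purely to illustrate how few lines such commandment-style checks would take - and how shallow they are - here is a toy sketch (plain Python; the Action fields and the permitted() rule are hypothetical, invented for this example, and not any real driverless-car API):

    from dataclasses import dataclass

    @dataclass
    class Action:
        # A drastically simplified, hypothetical description of a proposed action.
        kills_human: bool = False
        takes_property: bool = False
        bears_false_witness: bool = False

    def permitted(action: Action) -> bool:
        """Commandment-style filter: a handful of lines, and exactly as shallow as that sounds."""
        if action.kills_human:          # "you shall not murder"
            return False
        if action.takes_property:       # "you shall not steal"
            return False
        if action.bears_false_witness:  # "you shall not bear false witness"
            return False
        return True  # Maiming, recklessness, ignoring every order... all sail straight through.

    print(permitted(Action(kills_human=True)))  # False
    print(permitted(Action()))                  # True - doing nothing at all is always allowed

As the discussion above says, everything not explicitly named slips past, which is exactly why a handful of rules can't encode the real laws and customs a robotic car would need.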
Industrial robots are "made safe" by prohibiting humans from getting within their reach while they are on. Many guided missiles are designed to self-destruct if they lose target guidance. Of course they are intended to harm humans, of the "other" political persuasion. Roger (Dodger67) (talk) 12:05, 29 June 2015 (UTC)[reply]
Indeed - but that tactic isn't going to work out so well with robotic cars! SteveBaker (talk) 15:08, 29 June 2015 (UTC)[reply]
Although the story Light Verse (short story) suggests that, to get the identical behaviour, the robots may have required adjustment in some cases. Nil Einne (talk) 13:42, 29 June 2015 (UTC)[reply]
Indeed - and when your computer starts misbehaving, the most common suggestion is to "Reinstall Windows" to return it to normal behavior...but suppose you've tweaked settings and installed a bunch of plugins and applications, and made your own folders and such. Doing a "recalibration" by reinstalling Windows will wipe out a bunch of positive characteristics that your computer has picked up over the years - and you might well be upset if someone did that without asking you first. That pretty much mirrors Asimov's story - so it's as true with simple computers as it is with "positronic brains". SteveBaker (talk) 15:08, 29 June 2015 (UTC)[reply]
I'm amazed that robotic cars were allowed without some type of national debate on whether we want to go in that direction. However, when inevitably a robotic car kills somebody (possibly due to a glitch, or perhaps just something unavoidable, as happens with human drivers), then I'd expect the debate will take place. Another example of a tombstone mentality. StuRat (talk) 20:07, 29 June 2015 (UTC)[reply]
Well, they aren't exactly "allowed" yet. The various states that have approved testing still require the car to have pedals and a steering wheel and that there be a human driver ready to take control at any moment. I saw a report a week or so ago that pointed out that there are more robotic 18 wheeler trucks being tested on US highways than robotic cars...which is not something that's being widely discussed.
But the "top-down" approach of testing complete, fully-autonomous cars is only one approach. Robotic cars are also subtly creeping in through a "bottom-up" route. We have cars out on the roads (that you can actually buy) that apply the brakes if you get too close to the car in front or if you reverse into a perilous situation. Cars that can parallel park themselves are becoming pretty common, so are cars with sensors that tell you if you're drifting out of the lane, or pulling out in front of another car. The 2008 Lancia Delta actually biasses the power steering to gently nudge the car back into it's lane if you start drifting over the line. Both BMW and Tesla are soon to launch cars that do lane changing automatically. According to Lane_departure_warning_system#Vehicles there are a whole slew of cars that will steer themselves on freeways due to appear this year.
This is "robotic-cars by stealth" - adding one small feature at a time, letting people gain confidence in them, until one day you realize that the human driver simply isn't needed anymore. The worrying thing is that people are going to start depending on these features - so the "robotic" nature of existing cars will sooner or later be the indirect cause of human deaths.
So don't expect to necessarily wake up one day and be able to buy "The First Ever Fully Autonomous Car"...more likely, by the time a fully autonomous car appears, we'll all be driving cars that allow you to sleep, read and use your phone as they drive down freeways - but which have to wake you up to turn off the freeway...we're probably only a couple of years away from being able to do that. Cars that are 70% then 80%, then 90% autonomous in more and more road conditions will make the journey to fully robotic cars in tiny, tiny steps. The day you can put your kid into the car and tell it to drive him/her to school while you get on with something else will be just a tiny additional step from a car that needed you to get through 4-way stops, but did everything else automatically.
But 90 to 95% of car wrecks are caused by driver error...and even fairly klutzy robots should be able to do better than that. It's just that publicity will show the rare (but terrifying) cases of software bugs killing people - and conveniently ignore the thousands of lives that are saved by the very same software. Humanity really has to get to grips with the nature of statistics and risk assessment...but we're really not very good at it.
SteveBaker (talk) 03:35, 30 June 2015 (UTC)[reply]
I'm not as confident as you that robotic cars will be safer, at least initially. Each driver needs to make thousands of decisions correctly to avoid an accident each day, and that seems like a high rate of accuracy for the first gen of robotic cars to match. Presumably the current crop is "going for the low-hanging fruit", and only driving in the easiest situations. To make them able to handle all situations, and do so at least as well as a human, could take quite some time. StuRat (talk) 04:25, 30 June 2015 (UTC)[reply]
Your argument is perfect for aircraft as well. A pilot has to make thousands of decisions to keep an airplane flying safely. It turns out that in the most modern aircraft, humans cannot keep up with the decision making. They lose control and crash. Computers can make decisions faster than humans. Multiple computers can work together to make multiple decisions at the same time. Returning to humans driving cars - humans are easily distracted. Humans make very poor decisions. Humans cause most accidents - relatively few are caused by things like a deer running into the road or a tire falling off a truck. Most are caused by humans being stupid selfish humans who care more about something trivial and pointless than they care about paying attention to the road. 209.149.113.185 (talk) 13:10, 30 June 2015 (UTC)[reply]
To my knowledge most passenger aircraft are still flown by humans, and even drones typically have remote pilots, so I'm not sure what you meant by "in the most modern aircraft, humans cannot keep up with the decision making". Perhaps you mean the pilot says to turn right and the computer then adjusts all the control surfaces to accomplish that goal ?
One problem with relying on computers for critical things is how often they break down. A human brain can hopefully last for decades without a major failure, like a stroke, but computers only seem good for a few years, typically. So, you would need to frequently replace all the critical bits (which then requires testing to establish that nothing went wrong during the replacement) and provide triple redundancy (so it can take whatever action 2 of the 3 suggest). StuRat (talk) 13:20, 30 June 2015 (UTC)[reply]
Correct - modern jet aircraft (and especially fighter jets) are uncontrollable by a human alone. Computer sensors make thousands of adjustments every second. The human has a joystick and gives general commands, such as "turn right, however that works, but I don't care how it works, I just want to go right if you don't mind." Drones are the same. A computer maintains GPS track, altitude, and stability. The human sends requests to the computer, asking it to do things. In cars, humans still have a very direct connection between the steering wheel and the tires. The connection between the brake pedal and the brakes is mostly direct (automatic-braking systems step in to try and keep the meat sacks from having an accident). The gas pedal is being slowly moved away from the engine; its signal is instead sent to a computer that decides the best way to increase or decrease speed, primarily to increase fuel efficiency. Now, cars are coming with automatic braking to stop the car when the organ donor behind the wheel is too busy texting to pay attention to the cars stopped at the traffic light. Many also have automatic parallel parking assist because making all those decisions about turning the wheel and pressing the brake is far too difficult for a simple human brain to handle. In my opinion, there will always be a group of people who refuse to let a computer drive their car, just as there are still people who refuse to use automatic transmission because no computer could ever shift gears better than a human. 209.149.113.185 (talk) 14:22, 30 June 2015 (UTC)[reply]
Sounds like you are describing fly-by-wire. Yes, that's a type of automation, but not nearly as concerning as when the bots have total control. StuRat (talk) 18:12, 30 June 2015 (UTC)[reply]
On a typical 8 hour transatlantic airline flight, the pilot and copilot fly the plane for less than 5 minutes...and the plane could manage that by itself if it had to. They are there to cover the emergencies that the autopilot can't cope with...but most of the time, they are dead weight. The trouble with cars going the same way is that (just as with airline pilots), it's hard not to get bored and fall asleep - so in that one time in 10,000 when you need to be alert to handle that emergency that the computer can't handle...you're not going to be alert enough to deal with it. So we find that there is an effect akin to the uncanny valley where the smarter the computers get, the more danger there might be - until they cross some magic threshold of intelligence when it's safer to let the computer handle the emergency than it is to expect a human to do it.
What's interesting about the 'bottom-up' approach to automating cars with increasing levels of 'gadgets' like lane-keeping is that we're actually using the computers to handle the emergency cases and NOT take over the routine driving stuff - which is the opposite of what happened in the airline industry. SteveBaker (talk) 03:50, 1 July 2015 (UTC)[reply]
I would think boredom would be more of a problem with pilots, since there are fewer decisions to make mid-flight. In a car, even on a straight freeway, there's still decisions on when to pass, etc. This might be another reason why a copilot is needed, just to keep the pilot occupied with conversation to keep him awake. StuRat (talk) 04:45, 2 July 2015 (UTC)[reply]

Big Bang

Are there any tests made for the Big Bang, like Abiogenesis? -- Space Ghost (talk) 23:02, 28 June 2015 (UTC)[reply]

Observations are made, see Observational cosmology. Bubba73 You talkin' to me? 00:27, 29 June 2015 (UTC)[reply]
High energy particle accelerators (such as the Super Proton Synchrotron and the Large Hadron Collider, both at CERN, and the Relativistic Heavy Ion Collider at Brookhaven) create quark–gluon plasmas. These experiments replicate conditions during the quark epoch, when the universe was less than 10 milliseconds old. Detailed analysis of the results from these experiments have (so far) confirmed that the Standard Model of particle physics is correct. And the Standard Model allows cosmologists to make very precise predictions about the subsequent evolution of the universe that very closely match observational data such as the cosmic microwave background radiation and the relative proportions of elements in the universe. Gandalf61 (talk) 10:22, 29 June 2015 (UTC)[reply]
I just hope it doesn't take years to understand what you guys stated... Anyway thank you both I'll read it soon. -- Space Ghost (talk) 19:02, 29 June 2015 (UTC)[reply]

Life form(s)

Have scientists yet come up with simulations of life forms that may be available at a Goldie lock zone of/in a planet, during a stellar evolution? Say for example, a T Tauri Sun's rays will destroy the Earth in its goldie lock zone. If we move Earth to a distance where it will receive the equivalent temperature of our yellow sun and rotate it like it rotates in this solar system, and so on. If so, what are the results of life forms of stellar evolutions? -- Space Ghost (talk) 23:48, 28 June 2015 (UTC)[reply]

Just a minor point but it's spelled "Goldilocks" as in the character from Goldilocks and the Three Bears. Dismas|(talk) 07:13, 29 June 2015 (UTC)[reply]
Agreed. It's from in the story where she found one bowl of porridge was too hot, another too cold, and one was "just right". StuRat (talk) 18:00, 29 June 2015 (UTC)[reply]
Sorry friends, I didn't know. I heard the word in a science telecast program once. -- Space Ghost (talk) 19:01, 29 June 2015 (UTC)[reply]
Current evidence, gathered by examining star and planet formation, is that the star forms before the planets are formed. Therefore, your question is based on something that is not expected: a planet forming before the star forms. 199.15.144.250 (talk) 11:35, 29 June 2015 (UTC)[reply]
I recalled, then again, computer simulations can do any kind of tricks these days. -- Space Ghost (talk) 19:01, 29 June 2015 (UTC)[reply]
It's not just the star's formation that could wipe out life on a planet. The Sun's expected red giant phase should do so on Earth, not to mention a nova or supernova. StuRat (talk) 17:58, 29 June 2015 (UTC)[reply]
But I thought the hotter the Sun becomes, the more different molecules it produces; considering a red giant, and considering it's a Population I star, shouldn't it produce molecules which would satisfy the hypothesis of alien life form(s)? -- Space Ghost (talk) 19:01, 29 June 2015 (UTC)[reply]
Stars don't directly produce molecules at all, as it's too hot for molecules, which are composed of 2 or more atoms joined together, to form. They do produce larger, more complex atoms (see chemical element) the hotter they get (although the heat isn't just the cause, but also the result). Life does require these larger, more complex elements, but they can be leftover from a previous generation of stars. In our case, our Sun mostly just produces helium now, and all of the heavy elements on which life relies come from some supernova long before the solar system was formed. StuRat (talk) 19:47, 29 June 2015 (UTC)[reply]
See Goldilocks zone. StuRat (talk) 17:56, 29 June 2015 (UTC)[reply]
Okay, thanks. -- Space Ghost (talk) 19:01, 29 June 2015 (UTC)[reply]
You should have notified users of this directly, including User:Russell.mo who posted this question. μηδείς (talk) 22:19, 30 June 2015 (UTC)[reply]