May 2017
From Alec's personal notes: Evil Overlord rule #59 I will never build a sentient computer smarter than I am.
May 2017
so i guess that means we've all been under shepard indoctrination all this time.
May 2017
Aren't we?
I decided to call my dog "Shepard", and whenever I have bad times in my life I play the "Reignite" fan song by Malukah. If this doesn't sound like indoctrination, I do not know what does... ;-)
May 2017 - last edited May 2017
i'm with you up to the dog part... i have 2 cats named Jack and Marco, and no, when naming them we weren't actually thinking of the hunt for red october.
May 2017
@fudgietroll wrote:From Alec's personal notes: Evil Overlord rule #59 I will never build a sentient computer smarter than I am.
That actually defeats the purpose of the A.I. being synergistic with him and enhancing him beyond human ability....
May 2017
@Kondaru wrote:
@CasperTheLich wrote:as to what hacking actually refers to in this regard? i'm not sure, as it's not elaborated on. reaper based (and perhaps even geth based) electronic warfare tech is likely more sophisticated than what knight had to work with... even with edi, remember how quickly & quietly the reaper virus hit the normandy in me2? and that virus was aimed primarily at disabling the ship; if it had been tuned to attacking the ai, would edi have been able to stop it? that was also supposition, she very well might have been able to kill the virus if it had targeted her first, though perhaps not, and remember she's partly reaper based too, so if she could survive that might have been a reason why.
so we just don't know. i also think it's a bit naive to compare hijacking a sentient AI to, say, getting it drunk, smoking dope, or shooting up. are you serious?
Yep, I am serious... Kind of.
While I am far from being an expert, I perceive an AI as a sentient being, and thus there are several aspects of it that I am quite sure simply *must* define it:
- a physical "body", i.e. a blue box and, by extension, all the connected terminals
- memories, which are stored on hard drives or in clouds, and can possibly be shackled
- perception, which is tied to sensors and the programs responsible for interpreting stimuli
- personality, which is unique and self-developed by the AI.
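Programmer's doodle, nothing canon: the four aspects above could be written down as a plain record. Every name in this sketch is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SentientAI:
    """The four aspects from the post, as plain fields (illustrative only)."""
    blue_box: str                                   # physical "body" / hardware cluster
    memories: dict = field(default_factory=dict)    # stored facts, possibly shackled
    perception: list = field(default_factory=list)  # sensor-interpretation programs
    personality: str = "self-developed"             # emerges from all of the above

sam = SentientAI(blue_box="a blue box cluster")
sam.memories["first_boot"] = "Activated in Alec's lab."
```

The split matters for the argument below: the first three fields are data an attacker could conceivably touch, while personality is only derived from them.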
So let's think for a moment what can be affected by viruses and hackers, and how that would affect an AI.
- Viruses *can* potentially affect physical things, though this requires some skill and usually works only against limited types of equipment. Losing connection to external systems (like the Normandy-2 in ME2, perhaps) does not really impact an AI's personality, though prolonged sensory deprivation could possibly be dangerous. Much worse would be destroying blue-box clusters, as that could restrict, handicap, turn off, or even outright kill the AI's sentience. While a truly developed AI probably has some safeguards, safety drops, and back-up systems in store, the consequences of physical intrusion are dire. At the same time, this is the least probable method of intrusion, and also the most difficult to use for mind-bending / re-programming (since there is no guarantee how destroying a single cluster of wires would affect the AI).
- Memories can be affected relatively easily, and they matter a great deal for actual AI behavior. At the same time, the AI should probably be aware of that fact, and thus able to recognize when some of the "remembered" facts do not match. I do believe that memories would be the easiest thing to "defend" - an AI can use numerous safety drops, access tiers, and integrity checks to protect itself from memory altering. And yes, with super-human computing power an AI should be able to "deduce" the majority of hidden/shackled facts, if such things had somehow been programmed into it. Which means that memory altering should usually be more of a slow-down than the real thing.
- Perception altering is the way I believe viruses and hacks could actually work. By affecting how facts are perceived and interpreted (which would probably come down to how the programs are scripted, e.g. a Geth-written virus that changes how rounding is done for one type of calculation), AI behavior can be influenced easily. But that is the thing - it *is* similar to mind-bending drugs, alcohol, or indoctrination techniques. It is difficult to tell how such re-programming could really be done, and I doubt that an AI would store all its algorithms and programs in one place. I would expect all the vital procedures to be duplicated, stored in numerous processing units and safety drops, which would make it difficult to alter all of them at the same time. If so, that would explain why all those viruses and hacks are so slow to work (you know, with countdown missions and such): they need to get into the system, overwrite all the back-ups, and then get to the root (or the other way around). Until the process is complete, the AI should be able to understand that something is messing with its perception, and should be able to activate numerous counter-measures. Perhaps some viruses are too strong - a Reaper-tier hack could probably be too strong for a human-created AI to resist. Or maybe not - maybe hacks work because the altered procedures seem more attractive and more "logical" than the original ones, which makes the AI hesitate and then consciously integrate them into its core. But it *is* similar to drugs.
- As for personality: I do not really believe hacking it is possible, at least not directly. Personality is a result of experiences, reflexes, self-perception, perhaps even something spiritual like a "soul". It can be affected indirectly by making changes to the physical body, memories, and/or perception, but even so it should be quite inert. So even with body, memories, and perception altered, the original personality should linger at least for a moment - until it slowly evolves and adapts to the new circumstances. Which makes personality totally un-hack-able by itself.
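For fun: the two defenses described above - integrity checks on memories, and vital procedures mirrored across processing units - correspond to real, mundane techniques (hashing, and redundancy with majority voting). A toy Python sketch, with every name invented for illustration:

```python
import hashlib
from collections import Counter

def fingerprint(record: str) -> str:
    """SHA-256 digest of a memory record, taken at write time."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

def detect_tampering(memories: dict, ledger: dict) -> list:
    """Integrity check: list every memory whose digest no longer matches."""
    return [k for k, v in memories.items() if fingerprint(v) != ledger[k]]

def majority_version(replicas: list) -> str:
    """Redundancy check: the procedure version held by most replicas wins."""
    version, _count = Counter(replicas).most_common(1)[0]
    return version

# Memories: hash at write time, re-verify later.
memories = {"first_boot": "Activated in Alec's lab."}
ledger = {k: fingerprint(v) for k, v in memories.items()}
memories["first_boot"] = "Activated aboard the Hyperion."  # silent rewrite
print(detect_tampering(memories, ledger))  # the altered memory is flagged

# Procedures: three replicas, a virus corrupts only one of them.
replicas = ["round_half_even", "round_half_even", "round_always_up"]
print(majority_version(replicas))  # the corrupted copy is outvoted
```

An attacker who can rewrite the ledger *and* a majority of replicas at once defeats both checks, which fits the "slow countdown" idea above: the hack has to overwrite the back-ups before the altered version wins.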
So how many and which of the three laws did Alec use when creating SAM?
May 2017
@Kondaru wrote:
@arthurh3535 wrote:It doesn't make any logical sense for SAM to be the benefactor.
SAM: I'll get all the resources needed to fund 6 extragalactic ships, thereby saving Ellen by having myself recruit my creator and finish my creation to finish one of my primary purposes!!!!!
It would be much simpler.
SAM is a sentient being, which means it is concerned about its own fate. The Milky Way has an anti-AI policy, Alec was dishonorably discharged for dabbling in AIs, AIs are destroyed / shot on sight galaxy-wide... It actually makes sense to look for some way to run to a safe haven, and deciding that haven would be in another galaxy is absolutely within the capabilities of an artificial intelligence.
So first: collect funds for its own further development; and then second: find a way to survive. If there is some way to aid the nice human who created SAM while working on the aforementioned goals - all the better!
It doesn't make any sense for SAM to kill Jian though.
May 2017
@fudgietroll wrote:
So how many and which of the three laws did Alec use when creating SAM?
As far as I can tell, none. An Asimov robot would have literally frozen up dead at killing Ryder to save Ryder.
June 2017
@arthurh3535 wrote:
@fudgietroll wrote:So how many and which of the three laws did Alec use when creating SAM?
As far as I can tell, none. An Asimov robot would have literally frozen up dead at killing Ryder to save Ryder.
Ooh! I have it! The Benefactor is The Runaway Robot!