Benefactor (contains spoilers)

by UndeadHessian


Re: Benefactor (contains spoilers)

★ Pro

Although I do not see SAM as a real contender for the Benefactor, because I would find that extremely lame and far-fetched considering how SAM has behaved so far, there is something I remember about Alec Ryder.

 

He reasoned that a symbiotic relationship would end the threat of AI, since the AI is part of a being and therefore not an enemy but part of itself. However, I find that a bit naive, since SAMs can be transferred from one person to another and could therefore sabotage a host and then jump hosts. We can't really assume that an AI, with all its intelligence, wouldn't be able to figure that out.

 

I mean, for all we know SAM's ultimate purpose is to join up with Ellen Ryder, and the other Ryders are simply vessels to be used until that's possible. It could've been a directive from Alec that got a bit too ingrained in SAM.

 

So yeah, if that were the case I could actually follow what SAM's been doing so far.

Message 91 of 109 (692 Views)

Re: Benefactor (contains spoilers)

[ Edited ]
★ Pro

@CasperTheLich wrote:

while scott says he's never been through a relay, it could just mean there was never a reason to deploy him anywhere. however, i don't recall him saying that he'd never been through one (but, i could also be mistaken), just that he always wondered what was beyond it... not really the same thing, though it could allude to the same thing i suppose.

 

as to him never mentioning it, could just be an oversight by the writers. wouldn't surprise me one bit.

 

---edit

 and by oversight by writers, i mean that it's possible that... say, the writers either forgot that being stationed at a relay would mean scott was rapid deployment, or just didn't think to mention it.


Actually, he does mention that he was helping arm and train colonies to hold off raids, kind of like Ash/Kaidan and the MSV Arrow, and that they halted some pirate attacks. A fun little Easter egg the writers could add would be having Scott talk about meeting Ash/Kaidan while on this mission.

Message 92 of 109 (683 Views)

Re: Benefactor (contains spoilers)

★ Guide

It's possible he mentions it if you're playing Scott, but if you're playing Sara he makes no such reference. As Sara, he basically only says he was babysitting a relay if you ask him about his military service.

Message 93 of 109 (673 Views)

Re: Benefactor (contains spoilers)

★★★★★ Guide

@CasperTheLich wrote:

as to what hacking actually refers to in this regard? i'm not sure, as it's not elaborated on. reaper based (and perhaps even geth based) electronic warfare tech is likely more sophisticated than what knight had to work with... even with edi, remember how quickly & quietly the reaper virus hit the normandy in me2? and that virus was pointed primarily at disabling the ship, if it had been tuned to attacking the ai, would edi have been able to stop it? that was also supposition, she very well might have been able to kill the virus if it had targeted her first, though perhaps not, and remember she's partly reaper based too, so if she could survive that might have been a reason why.

 

so we just don't know. i also think it's a bit naive to compare hijacking sentient AI, to say getting it drunk, smoking dope, or shooting up. are you serious?


Yep, I am serious... Kind of.

While I am far from being an expert, I perceive an AI as a sentient being, and thus there are several aspects that I am quite sure simply *must* define it:

- a physical "body" or blue box, and by extension all the connected terminals

- memories, which are stored on hard drives or in clouds, and can possibly be shackled

- perception, which is related to sensors and the programs responsible for interpreting stimuli

- personality, which is unique and self-developed by the AI.

So let's think for a moment about what can be affected by viruses and hackers, and how that would affect an AI.

- Viruses *can* potentially affect physical things, though this requires some skill and usually can be done only with limited types of equipment. Losing connection to external systems (like Normandy-2 in ME2, perhaps) does not really impact an AI's personality, though prolonged sensory deprivation could possibly be dangerous. Much worse would be destroying blue-box clusters, as that could restrict, handicap, turn off, or even outright kill the AI's sentience. While a truly developed AI probably has some safeguards, safety drops, and back-up systems in store, the consequences of physical interference are dire. At the same time, this is the least probable method of intrusion, and also the most difficult to use for mind-bending / re-programming (since there is no guarantee how destroying a single cluster of wires would affect the AI).

- Memories can be relatively easily affected, and they are quite significant for actual AI behavior. At the same time, the AI should probably be aware of that fact, and thus able to recognize when some of the "remembered" facts do not match. I do believe memories would be the easiest thing to "defend": an AI can use numerous safety drops, access tiers, and integrity checks to protect itself from memory altering. And yes, with super-human computing power the AI should be able to "deduce" the majority of hidden/shackled facts if such had somehow been programmed into it. Which means that memory altering should usually be more of a slow-down than a real threat.

- Perception altering is the way I believe viruses and hacks could actually work. By affecting the way facts are perceived and interpreted (which would probably be related to how programs are scripted, e.g. a Geth-written virus that changes the way rounding is done for one type of calculation), AI behavior can be easily influenced. But that is the thing - it *is* similar to mind-bending drugs, alcohol, or indoctrination techniques. It is difficult to tell how such re-programming could really be done, and I doubt that an AI would store all its algorithms and programs in one place. I would expect all the vital procedures to be multiplied, stored in numerous processing units and safety drops, which would make it difficult to alter all of them at the same time. If that is the case, it would give us some explanation of why all those viruses and hacks are so slow to work (you know, with countdown missions and such): they need to get into the system, overwrite all the back-ups, and then get to the root (or the other way around). Until the process is complete, the AI should be able to understand that something is messing with its perception, and should be able to activate numerous counter-measures. Perhaps some viruses are too strong - a Reaper-tier hack could probably be too strong for a human-created AI to resist. Or maybe not - maybe hacks work because those altered procedures seem more attractive and more "logical" than the original ones, which makes the AI hesitate and then consciously integrate them into its core. But it *is* similar to drugs.

- As for personality: I do not really believe hacking it is possible, at least not directly. Personality is a result of experiences, reflexes, self-perception, perhaps even something spiritual like a "soul". It can be affected indirectly by making changes to the physical body, memories, and/or perception, but even so it should be quite inert. So even with body, memories, and perception altered, the original personality should linger at least for a moment - until it slowly evolves and adapts to the new circumstances. Which makes personality totally un-hack-able by itself.
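To make the "multiplied procedures plus integrity checks" idea a bit more concrete, here is a toy sketch in Python (everything in it is invented for illustration - it is not how any in-game AI works): three replicas of a vital procedure are majority-voted, and checksums taken while the replicas were known-good expose the one a "hack" has quietly overwritten.

```python
import hashlib

def procedure_copy(x):
    """One replicated copy of a 'vital procedure': round to the nearest integer."""
    return round(x)

def tampered_copy(x):
    """A hacked copy: the rounding is quietly biased upward."""
    return int(x) + 1

def checksum(fn):
    """Integrity check over a procedure's bytecode (its stored 'shape')."""
    return hashlib.sha256(fn.__code__.co_code).hexdigest()

class ToyAI:
    def __init__(self, copies):
        self.copies = list(copies)
        # Checksums taken while the copies are known-good -- the 'safety drop'.
        self.baseline = [checksum(fn) for fn in self.copies]

    def run(self, x):
        outputs = [fn(x) for fn in self.copies]
        # Majority vote: a single altered replica is outvoted...
        result = max(set(outputs), key=outputs.count)
        # ...and the integrity check flags which replica changed.
        suspects = [i for i, fn in enumerate(self.copies)
                    if checksum(fn) != self.baseline[i]]
        return result, suspects

ai = ToyAI([procedure_copy, procedure_copy, procedure_copy])
ai.copies[1] = tampered_copy      # the "hack" silently overwrites one replica
result, suspects = ai.run(2.4)    # the majority still answers 2; replica 1 is flagged
```

Of course a real attack would have to overwrite all the replicas and the baseline too - which is exactly why it would be slow, as described above.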

Message 94 of 109 (668 Views)

Re: Benefactor (contains spoilers)

★★★ Pro

@Kondaru

i also think we shouldn't guess how artificial intelligence will actually work, what can influence its functions, or how viruses would affect it, until someone actually creates true ai, or at least a functional blueprint of it. though in the context of mass effect, i'm not really sure i get how most of their ai actually functions anyway. so, i'd just be guessing. nor am i an expert in programming or computer tech.

Message 95 of 109 (657 Views)

Re: Benefactor (contains spoilers)

★★★★★ Guide

@CasperTheLich wrote:

@Kondaru

i also think we shouldn't guess how artificial intelligence will actually work, what can influence its functions, or how viruses would affect it, until someone actually creates true ai, or at least a functional blueprint of it. though in the context of mass effect, i'm not really sure i get how most of their ai actually functions anyway. so, i'd just be guessing. nor am i an expert in programming or computer tech.


It's TV/Movie/Video-game AI, it functions as plot dictates.

Message 96 of 109 (646 Views)

Re: Benefactor (contains spoilers)

[ Edited ]
★★★★★ Guide

@CasperTheLich wrote:

@Kondaru

i also think we shouldn't guess how artificial intelligence will actually work, what can influence its functions, or how viruses would affect it, until someone actually creates true ai, or at least a functional blueprint of it. though in the context of mass effect, i'm not really sure i get how most of their ai actually functions anyway. so, i'd just be guessing. nor am i an expert in programming or computer tech.


While I do agree that we should not get *too* serious when discussing AI in video games, I do not believe the suggestion that "we shouldn't guess" anything is valid - *at least* as far as the ME games are concerned. AIs were put into the franchise, they are vital to the plots of *all* the Mass Effect games, and the Devs put a lot of effort into explaining some basics of how they work, both in-game and in codex entries. We cannot just say "OK, it is a hardly understandable magic box, let's accept that it can do anything and anything can be done with it". I want to believe there is at least some "science" to the "fiction", with the MEs being advertised as SF games.   ;-)

 

So yes, I believe I am entitled to make some assumptions and to develop some parallels that allow me to understand Mass Effect AIs better. I am using some basic logic to deconstruct the idea, and I am using those deconstructions to experience the game. I do not think that playing ME2 and ME3 would make much sense without allowing EDI or Legion some genuine personality, rather than just perceiving them as "meh, something, something, I do not get it, they do not really matter to me at all". And should I also restrain myself from making any assumptions about alien species because, you know, they are "alien"?

 

Of course, Devs *can* (and often *do*) apply some elements that are inconsistent with my vision and understanding, which is their privilege - even if in some cases it just makes those elements cheap and silly to me.

 

At the same time, I can just discuss possible ways of hacking an AI with you and other users - which is probably not very productive, but still remains a nice and intelligent way of spending time.   :-)

Message 97 of 109 (636 Views)

Re: Benefactor (contains spoilers)

★★★ Pro

@Kondaru

well, let's try flipping this around then. if you were going about making a virus to corrupt an ai... such as EDI, or the zha 'til (the latter you'd first need to define technologically, as we know so little about them), how would you go about it? the intent would be to turn them hostile against organic life, for a useful purpose. such as, but not limited to: turning the zha 'til into a living weapon to simply divert the attention of the protheans or somesuch. i'm just using that as an example.

 

or, on second thought, this is getting way off topic. maybe a new thread? something like "creating a technological singularity with the intent of causing the end of the world"? just a working title.

Message 98 of 109 (622 Views)

Re: Benefactor (contains spoilers)

★★★ Pro

@Kondaru wrote:

@CasperTheLich wrote:

as to what hacking actually refers to in this regard? i'm not sure, as it's not elaborated on. reaper based (and perhaps even geth based) electronic warfare tech is likely more sophisticated than what knight had to work with... even with edi, remember how quickly & quietly the reaper virus hit the normandy in me2? and that virus was pointed primarily at disabling the ship, if it had been tuned to attacking the ai, would edi have been able to stop it? that was also supposition, she very well might have been able to kill the virus if it had targeted her first, though perhaps not, and remember she's partly reaper based too, so if she could survive that might have been a reason why.

 

so we just don't know. i also think it's a bit naive to compare hijacking sentient AI, to say getting it drunk, smoking dope, or shooting up. are you serious?


Yep, I am serious... Kind of.

While I am far from being an expert, I perceive an AI as a sentient being, and thus there are several aspects that I am quite sure simply *must* define it:

- a physical "body" or blue box, and by extension all the connected terminals

- memories, which are stored on hard drives or in clouds, and can possibly be shackled

- perception, which is related to sensors and the programs responsible for interpreting stimuli

- personality, which is unique and self-developed by the AI.

So let's think for a moment about what can be affected by viruses and hackers, and how that would affect an AI.

- Viruses *can* potentially affect physical things, though this requires some skill and usually can be done only with limited types of equipment. Losing connection to external systems (like Normandy-2 in ME2, perhaps) does not really impact an AI's personality, though prolonged sensory deprivation could possibly be dangerous. Much worse would be destroying blue-box clusters, as that could restrict, handicap, turn off, or even outright kill the AI's sentience. While a truly developed AI probably has some safeguards, safety drops, and back-up systems in store, the consequences of physical interference are dire. At the same time, this is the least probable method of intrusion, and also the most difficult to use for mind-bending / re-programming (since there is no guarantee how destroying a single cluster of wires would affect the AI).

- Memories can be relatively easily affected, and they are quite significant for actual AI behavior. At the same time, the AI should probably be aware of that fact, and thus able to recognize when some of the "remembered" facts do not match. I do believe memories would be the easiest thing to "defend": an AI can use numerous safety drops, access tiers, and integrity checks to protect itself from memory altering. And yes, with super-human computing power the AI should be able to "deduce" the majority of hidden/shackled facts if such had somehow been programmed into it. Which means that memory altering should usually be more of a slow-down than a real threat.

- Perception altering is the way I believe viruses and hacks could actually work. By affecting the way facts are perceived and interpreted (which would probably be related to how programs are scripted, e.g. a Geth-written virus that changes the way rounding is done for one type of calculation), AI behavior can be easily influenced. But that is the thing - it *is* similar to mind-bending drugs, alcohol, or indoctrination techniques. It is difficult to tell how such re-programming could really be done, and I doubt that an AI would store all its algorithms and programs in one place. I would expect all the vital procedures to be multiplied, stored in numerous processing units and safety drops, which would make it difficult to alter all of them at the same time. If that is the case, it would give us some explanation of why all those viruses and hacks are so slow to work (you know, with countdown missions and such): they need to get into the system, overwrite all the back-ups, and then get to the root (or the other way around). Until the process is complete, the AI should be able to understand that something is messing with its perception, and should be able to activate numerous counter-measures. Perhaps some viruses are too strong - a Reaper-tier hack could probably be too strong for a human-created AI to resist. Or maybe not - maybe hacks work because those altered procedures seem more attractive and more "logical" than the original ones, which makes the AI hesitate and then consciously integrate them into its core. But it *is* similar to drugs.

- As for personality: I do not really believe hacking it is possible, at least not directly. Personality is a result of experiences, reflexes, self-perception, perhaps even something spiritual like a "soul". It can be affected indirectly by making changes to the physical body, memories, and/or perception, but even so it should be quite inert. So even with body, memories, and perception altered, the original personality should linger at least for a moment - until it slowly evolves and adapts to the new circumstances. Which makes personality totally un-hack-able by itself.


Do you think you could force a change of personality by "hacking" in memories or experiences that otherwise wouldn't exist? I would expect SAM's personality to adapt and change with the addition of my input as well as my father's - doubling the sample size. It also appears SAM's connection works the same way for my sibling (and probably all members of the pathfinding teams). That's a scary thought - Liam is helping to write SAM's personality. If you uploaded 25 GB of renegade Shep, would SAM blink?

Message 99 of 109 (609 Views)

Re: Benefactor (contains spoilers)

★★★★★ Guide

@CasperTheLich

 

One immediate way would be to provide invalid sensory input by hijacking sensors or terminals. If EDI's sensors were twisted so that she perceived Joker as a Reaper, it/she would probably shoot him on sight - which is enough.
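That sensor-hijack idea can be sketched in a few lines of toy Python (all names invented, of course - this is an illustration, not anything from the games): the decision rule itself stays perfectly sound, yet the outcome flips because the compromised layer sits below it.

```python
# Toy sketch of the sensor-hijack idea: the AI's reasoning is untouched;
# a compromised sensor layer simply lies about what it sees.

FRIEND_SIGNATURES = {"human-pilot"}

def classify(signature):
    """The AI's (perfectly reasonable) decision rule."""
    return "hold fire" if signature in FRIEND_SIGNATURES else "engage"

def honest_sensor(target):
    return target["signature"]

def hijacked_sensor(target):
    """The hack: the sensor layer relabels the target."""
    return "reaper-pattern"

joker = {"signature": "human-pilot"}
safe = classify(honest_sensor(joker))        # "hold fire"
danger = classify(hijacked_sensor(joker))    # "engage" -- same rule, spoofed input
```

No re-programming of the AI itself is needed, which is what makes this the "immediate" option.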

 

Another method would be to *convince* the AI that organic beings should be destroyed - which can be done either by altering its reasoning (re-programming) or by providing good reasons (possibly with memory uploads / replacements, but it can also be done by simply listing some reasons in a persuasive way). As far as I understand, that is how Reaper indoctrination worked: they provided good reasons for some specific behavior, and then reinforced those reasons with "programs" that the indoctrinated party willingly accepted, but which were actually taking over or "shackling" both organic and synthetic beings.

 

Possibly, weak points in programming can be identified, and then a hacker can feed the AI something similar to malicious hyperlinks. If we assume that a huge number of programs needs to be running for the AI to be operational, it is possible to infect the AI with data related to some of the "petty" programs without the AI being aware of the fact or having time to counter it. In a similar way, people are aware of what they see, smell, and feel, but are not directly aware of, e.g., how their heart beats or what is in the air they are breathing. An AI can be "aware" of and "control" major processes, but it may not be able to consciously care for *all* of them. Perhaps it would thus work similarly to the way diseases and vaccinations work for humans: young AIs still rely on the original programs and procedures, and those can be easily exploited; with experience, AIs learn how to defend themselves and replace those original programs and procedures - which are then much more difficult to circumvent. It could also be that, e.g., something similar to a DDoS could be used to flood the AI with data that forces it to analyze complex input, so the AI loses the ability to thoroughly control other "basic" processes. It would then be enough to smuggle in some hidden code with a "program update" tag, or to prod the AI with some false stimuli.
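A toy Python sketch of that overload idea (purely illustrative, with invented names): checking an update's signature is just another task competing for a fixed "attention" budget, so a flood of urgent events leaves the unchecked fast path as the only path.

```python
import hashlib

TRUSTED_KEY = b"invented-signing-key"   # stand-in for a real signature scheme

def sign(payload):
    """Produce a (payload, signature) pair the AI would normally check."""
    return payload, hashlib.sha256(TRUSTED_KEY + payload).hexdigest()

def verify(update):
    payload, sig = update
    return sig == hashlib.sha256(TRUSTED_KEY + payload).hexdigest()

def cycle(events, update, budget=3):
    """One processing cycle with a fixed attention budget.

    Urgent events are handled first; checking an update's signature costs
    one unit of attention and is skipped when none remains -- the gap."""
    spent = min(len(events), budget)    # flood traffic eats the budget
    if spent < budget:
        return "installed" if verify(update) else "rejected"
    return "installed-unverified"

forged = (b"malicious-routine", "not-a-real-signature")
quiet = cycle([], forged)                # "rejected": the check runs and fails
flooded = cycle(["alert"] * 10, forged)  # "installed-unverified": check skipped
legit = cycle([], sign(b"patch"))        # "installed": properly signed
```

The flood never breaks the defenses directly; it just buys the window in which they are not applied.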

 

Physical interference, e.g. physically replacing data cores or processors, always remains the greatest risk, even if it is the easiest to detect.

 

 

@jpcerutti1

 

Well, as always, it depends on how we define personality. I would say that personality is what is responsible for wishes, sentiments, reflexes... Sure, it results from experience, feelings, and perception. Sure, by altering memories or perception one would definitely impact personality. But I would not expect the change to be instant.

 

Let me try a parallel: let's assume that a wife loves her husband, that he is good to her, provides money, safety, etc. It lasts for years. Then it turns out that the husband is a psychopath and a serial murderer. Riiight, she knows this is not good, but it does not necessarily change her *feelings* toward him - she is used to trusting him and depending on him. Then let's say that he hits her. OK, that is even worse (eh, this is relative, and perhaps depends on perspective, but I would say this is worse *for her*). But she has never considered living without him, so even if she starts to fear him instead of loving him, she is still not able to change her ways *just then*. It will take her a moment to reconsider her position, and possibly to react in some meaningful way.

 

Another example: let's assume we have an AI that is embedded in a combat platform and tasked with military duties. This AI constantly fights, and develops thousands of programs and algorithms for clashes, skirmishes, and battles. Then someone manages to replace all its memories with the illusion of being a nurse, and probably replaces the combat platform with a benign one as well. Sure, the AI believes it is a nurse and understands what being a nurse is about... But, hey, all the programs and algorithms it has are still for combat rather than for nursing, right? As a result, our AI is a bit sloppy as a nurse, until it develops some actual nursing procedures. At the same time, when given a gun or two, it would easily revert to its old programs - even though it would not truly understand why it is such a good fighter and such a poor nurse. And true - it would change IN TIME, so e.g. after several months or years all those old combat programs would surely be replaced with nursing programs... But it is never instant.

 

And as for the Shepard thing... Well, there is no denying that SAM would *need to* change after such a feed. After all, the original ME trilogy changed all of *us*. I bet it is much stronger in this respect than any Reaper indoctrination!   ;-)

Message 100 of 109 (599 Views)