Star Trek Discussion Question Responses
(In Star Trek TOS: "The Ultimate
Computer," an advanced AI is deployed to take over Enterprise operations.
Although it proves efficient in its decision making, it begins to behave
erratically and ultimately needs to be disconnected.)
Stephen: Consider the question of whether, and how much,
to cede high-stakes decision-making power to AI. In answering that question,
what aspects of making emotionally-fraught, ethically-based decisions do humans
tend to be good at, and not-so-good at? What aspects do computers tend to be
good at, and not-so-good at?
Rishi: In addition to believing in insanely-long
paragraphs, I also believe that humans should have the final say over any
decision-making power given to AI. That is because, to me, a faster reaction
time is not worth having the AI make a mistake, fire a missile unnecessarily,
and harm or take countless lives. If a human had the final decision in that
situation, they would have the opportunity to determine whether the threat was
real and make an educated decision. Humans tend to be better at recognizing the
difference between false alarms and true threats, and computers have no
reliable way of determining that. Human emotion, for the most part, allows
people to make responsible decisions: they can be emotionally intelligent
without letting their emotions become the controlling factor in their
decision-making. Humans also have the capability to be ethically intelligent,
because they can weigh the ethical benefits and risks of a given decision. In
the Star Trek episode, M-5 was willing to sacrifice 1,000+ lives for its own
survival. In my opinion, that is not an ethically sound decision, and humans
would have tried to find a better solution. Humans would try to find the
solution that minimizes lives lost, and Captain Kirk did exactly that when he
convinced the M-5 that what it was doing was wrong, believing that Captain
Wesley would not fire once their shields went down. A computer would not have
that belief and would fire regardless. Computers are also good at things in
their own way, in terms of response time and the accuracy of their attacks.
They can make decisions very quickly because the behavior is pre-programmed
and they are just executing instructions. However, when it comes to making
decisions, that is about as far as it goes for computers. Overall, I think
computers should definitely be used to make attacks go quicker and should be
able to make some decisions; however, the final decision should be approved by
humans, so that the computer's decision can be checked to make sure it is
correct and appropriate for the situation.
John: I am personally of the opinion that there is no natural
process that cannot be replicated by a machine or computer with the correct
lines of code. This includes thinking, moral rationality, and decision making.
Currently, no, I would not give decision-making power over large, important
situations to the kinds of computers and AI that we have. We haven't yet
developed a sophisticated enough agent that can comprehend morality and factor
it into its decisions. I think a major contributing factor to this hurdle, as I
mentioned on movie night, is the fact that humanity itself has yet to come to a
consensus on what is "right" and "wrong", and so we are wholly incapable of
coding those things into an AI. As it stands now, that is certainly one of
humanity's advantages over computers: the ability to act as a moral agent.
While I don't subscribe to the belief that computers "never make
errors" (quantum uncertainty and cosmic rays will always pull us just short of
100% accuracy), they are certainly less error-prone and incredibly more
efficient at calculations than humans, to name some clear and inarguable
advantages that computers have over us. However, I believe humans still have at
least one advantage in the area of calculation: intuition. For example, a human
may look at a problem and instinctively make several connections that lead to a
solution, where a computer would have had to exhaust what is likely a vast
number of incorrect solutions before arriving at the correct one. Again,
though, given the unknown sophistication of future technology, I don't see this
as an impossibility for future AI. There is clearly some way that a computer
can intuit (our brains do so thousands of times a day); we just have yet to
figure out how to write it in code.
Abigail: AI computers have the potential to streamline decision
making and various other processes. They can analyze vast amounts of data and
scenarios to come to a logical conclusion. The benefit of using AI computers is
that these systems are consistent, focused, and detailed. Tasks like pattern
recognition are easy for them. However, they can struggle with unique
situations that do not have a lot of training data to guide them. Situations
that require or seek empathy can also be difficult for them to process. The
"logical" answer is not always an appropriate answer. That said, I am not
saying that AI cannot reach a conclusion that is considered empathetic or
compassionate.
Humans, on the other hand, are flawed beings who may not compute as fast or be
as dedicated as a computer. We are easily distracted and not nearly as
efficient as a computer. However, humans can relate to the human experience,
which is one benefit of human decision making. They are better at naturally
reading the emotional experiences of other humans and relating them to their
own. In their considerations, they draw on their own experiences, which can be
beneficial in some cases. We experience love, suffering, joy, pain, hope, etc.,
and this makes us better at decision making than machines that are "more
efficient." Think about court cases with humans as the decision makers. Humans
should make decisions when it comes to judging the actions of other humans.
Alip: Humans tend to be good at evaluating situations with reasoning.
They can understand things in different contexts, like cultural differences
and non-verbal cues, and they can adapt to different situations with a flexible
mindset. Humans react to each situation differently, which is why they weigh
situations based on their personal experience and philosophical reasoning.
Humans are biased, though; that is one area where humans are not so good, since
biases can cause delays when making important decisions.
Computers tend to be good at processing and analyzing data; it is well known
that computers can process and analyze extensive amounts of data quickly.
Computers are also unbiased: they follow the instructions in their programming
and will make consistent decisions based on how good the result would be if
they take certain actions. But computers lack emotional intelligence. They can
indeed do everything faster than humans, but they do not think the way humans
do.
Matt Rose: Any decision that involves the possibility of losing human
life should be made entirely by a human. Not because a human would be better at
making the choice; an AI would certainly make a more logical decision. But it
should be a human who is responsible for the consequences of that decision. In
fact, any decision that directly affects the livelihood of humans should be
made by a human, because a human can relate to a fellow human, unlike a
computer that only sees those people as a number.
However, AI would be excellent at making any other type of high-stakes
decision that does not involve the livelihood of humans, since it sees problems
from a purely analytical and logical view.
Coop: High-stakes decision making should not be given to AI unless
the decision is based on factual information. AI can gather facts and organize
them to help choose between one thing and another. When it comes to situational
decisions where the result is unknown, AI cannot possibly know any better than
humans how to weigh the possible benefits and consequences of that decision.
AI theoretically should be able to see possible outcomes and whether or not
they are successful, but the humans more involved with a decision have a better
sense of the extent to which an outcome is successful. The advantage that AI
has over humans when it comes to making emotionally fraught decisions is that
AI has no bias. When humans make decisions, they tend to try to benefit
themselves or their associated group while others may suffer from that choice.
AI has no emotional ties to anyone and can make decisions that aim to generate
more positive outcomes than negative ones.
Jordan: High-stakes decision-making is a skill that some humans
have a hard time grasping throughout their lives, especially if they have never
experienced or been placed in a position to do so. In my opinion, this form of
decision-making requires certain characteristics that are usually developed as
a result of exposure or simple experience. Therefore, ceding this power to AI
would require a generous amount of consideration, to say the least. AI has been
known to make decisions and come to conclusions in a more binary form of
understanding, lacking consideration of the emotional impact and the impact on
external parties. The only way this type of decision-making could be left to
AI, in my opinion, is if there were a way to guarantee that all parties and
scenarios would be considered in the most optimal way. If not, the margin for
error remains extremely large, especially in high-stakes situations.
Humans tend to be good at making emotionally-fraught and ethically-based
decisions due to their ability to draw on past experiences and reason about
cause and effect. Our lives as humans play a large part in shaping our moral
compass, leading to more informed decision-making. More specifically, humans
are better at teamwork (if necessary), risk evaluation, and even rationality.
Compared to computers, humans can definitely better handle circumstances that
are more layered (emotional, logical, objective, subjective, etc.). On the
other hand, computers would be better at blanket circumstances that can easily
be solved with one or two considerations at most. In terms of characteristics,
computers would be great at gathering information on a situation, creating
logical plans, or performing any form of sequential analysis. Overall, both
parties provide benefits, and as time progresses it will be interesting to see
how computers continue to develop in this area.
Raze: I would cede some high-stakes decision-making to AIs, mostly
because there is no moral or ethical barrier they can't cross. Take the
example of choosing between two lives: if you choose one person, the other
dies. A human would weigh the emotional and relationship side of things and
pick the person they know better, but an AI would most likely pick the person
with the most potential or self-worth. So I would limit its capabilities to act
solely as a tool for humanity to use to solve select problems.
Humans are good at making ethical decisions. Our jury system is a good example
of this: we take citizens who aren't connected to the prosecutor or the
defendant, and they are able to reach a sound conclusion based on the evidence
provided in court. As for AI, it would be good at evaluating the evidence, but
it wouldn't be able to come to a good conclusion without more evidence.
Mr. Ford: When answering the question of how much power to give to A.I.,
if any, in high-stakes decisions, the clear answer is a clean 70/30 split.
A.I. is a logical tool, using high-powered computers to calculate problems
that would take humans significantly longer to solve. On top of this, A.I., at
the moment at least, is an incredibly fast and useful tool. Any human can use
this tool to hasten their work while keeping the quality of the work up.
However, any technology is prone to bugs and errors. It must be considered
that a machine can make a mistake through one of these defects, and if this
occurs, who's to blame?
Looking at these bugs and defects, as well as the moral question of how logic
can differ from the "ethical" choice, it must be inferred that not all power
can be given to an A.I. Human beings can feel on an emotional level what is
"right" and "wrong". How is a computer going to decide what is right or wrong
when it doesn't have the same emotional intelligence as a human? Maybe one day
it will, but at the moment, a human's touch is needed to decide whether an
A.I.'s work can be used or not.