Although we are not ready to begin our AI project, we are close
enough to begin forming the development team. Presently, we have two
confirmed team members. We're now actively searching for Singularitarians
with software engineering and cognitive science expertise to join the
development team. If you believe you may be a suitable candidate, or
know someone who may, please read this page, and consider getting in
touch. We're searching for
nothing less than the core team to fulfill our mission; we need the
very best we can find.
* * *
The first thing to remember is...
Not everyone needs to be a seed AI programmer. SIAI will probably
need at least 20 regular donors per working programmer, perhaps more.
There is nothing wrong with being one of those donors. It is
just as necessary.
Please bear this in mind. Sometimes it seems that everyone wants to
be a seed AI programmer; and that, of course, is simply not going to
work. Too many would-be cooks, not enough ingredients. You may not
be able to become a seed AI programmer. In fact, it is extremely
likely that you can't. This should not break your heart. You can
learn seed AI after the Singularity, if we live through it and we're
all still human-sized minds - learn the art of creating a mind when
it's safe, when you can relax and have fun learning, stopping
to smell the roses whenever you feel like it, instead of needing to
push as hard as possible.
Meanwhile, before the Singularity, consider becoming a regular donor
instead. It will be a lot less stressful, and right now we have many
fewer regular donors than people who have wandered up and expressed an
interest in being seed AI programmers.
* * *
Seed AI. It sounds like a cool idea, something that would look good
on your resume, so you fire off a letter to the Singularity Institute
indicating that you may be vaguely interested in coming on board.
Stop. Hold on. Think for a second about what you're involving
yourself in. We're talking about the superintelligent transition
here. The closest analogy would be, not the rise of the human
species, but the rise of life on Earth.
We're talking about the end of the era of evolution and the beginning
of the era of recursive self-improvement. This is an event that
happens only once in the history of an entire... "species"
isn't the right term, nor "civilization"; whatever you call the entire
sequence of events arising from Earth-originating intelligent life.
We are not talking about a minor event like the Apollo moon landing or
the invention of the printing press.
This is not something you put on your resume. This is not something
you do for kicks. This is not something you do because it sounds
cool. This is not something you do because you want your name in the
history books. This is not, even, something that you do because it
strikes you as beautiful and terrifying and the most important thing
in the world, and you want to be a part of it, you want to have been
there. These are understandable feelings, but they are, in the end,
selfish. This is something you should only try to do if you feel that
it is the best thing you can do, out of all the things you might do;
and that you are the best one to do it, out of all the people who
might apply for a limited number of positions. The best. Your
personal feelings are not a consideration. Is this the best thing for
Earth?
You may not get paid very well. If there are any dangerous things
that happen to seed AI programming teams, you will be directly exposed
to them. At any time the AI project might fail or run out of funding
and you will be left with nothing for your work and sacrifice. If the
AI project moves to Outer Mongolia, you will be expected to move there
too. Your commitment is permanent; once you become an expert on a
piece of the AI, on a thing that doesn't exist anywhere else on Earth,
there will be no one who can step into your shoes without
substantial delay. Ask yourself whether you would still want to be a
seed AI programmer if you knew, for certain, that you would die as a
result - the project would still have a reasonable (but not certain)
chance of success; but, win or lose, you would die before seeing the
other side of the Singularity.
Do you, having fully felt the weight of those thoughts, but balancing
them against the importance of the goal, want to be an AI programmer
anyway? If so, you still aren't thinking clearly. The previous
paragraphs are not actually relevant to the decision, because
they describe personal considerations which, whatever their relative
emotional weight, are not major existential
risks. You should be balancing the risks to Earth, not balancing
the weight of your emotions - you cannot count on your brain to do
your math homework. If you reacted emotionally rather than
strategically, you may not have the right frame of mind to act calmly
and professionally through a hard takeoff.
That too is a job requirement. I have to ask myself: "Will this
person panic when s/he realizes that it's all really happening and
it's not just a fancy story?"
Certain people are reading
this and thinking: "Oh, but AI is so hard; everyone fails at AI; you
should prove that you can build AI before you're allowed to talk about
people panicking when it all starts coming true." They probably
aren't considering becoming AI programmers. But for the record: It
doesn't matter how much macho hacker testosterone has become attached
to the problem of AI; it makes
no sense to even try to build a seed AI unless you
expect that final success would be a good thing, meaning
that it is your responsibility to care about the complete
trajectory from start to finish, including any problem that
might pop up along the way. It doesn't matter how far off the
possibility is; the integrity of the entire future project trajectory
must be considered and preserved at all points. If someone would
panic during a hard takeoff, you can't hire them; you're setting
yourself up to eventually fail even if you do everything right.
The tasks involved in creating a seed AI can be
divided into three types:
First, there are tasks that can be easily modularized away from deep
AI issues; any decent True Hacker should be able to understand what is
needed and do it. Depending on how many such tasks there are, there
may be a limited number of slots for nongeniuses. Expect the
competition for these slots to be very tight.
Second are tasks that require a deep understanding of the AI theory in
order to comprehend the problem.
There's a tradeoff between the depth of AI theory, the amount of time
it takes to implement the project, the number of people required, and
how smart those people need to be. The AI theory we're planning to
use - not LOGI, but LOGI's successor - will save time and means that
the project may be able to get by
with fewer people. But those few people will have to be brilliant.
What kind of people might be capable of learning the AI theory and
applying it?
Aside from anything else, they need to be very smart people with
plenty of raw computational horsepower. That intelligence is a
prerequisite for everything else, because it is what powers any more
complex abilities or skills.
Within that requirement, what's needed are people who are very quick
on the uptake; people who can successfully complete entire patterns
given only a few hints. The key word here is "successfully" - thanks
to the way the human brain is wired, everyone tries to complete
entire patterns based on only a few hints. Most people get things
wildly wrong, don't realize they have it wildly wrong, and resist
correction.
But there are a few people we know, just a few, who will get a few
hints, complete the pattern, and get it all right on the first try.
People who are always jumping ahead of the explanation and turning out
to have actually gotten it right. People who, on the very rare
occasions they get something wrong, require only a hint to snap back
into place. (Our guess is that some people, in completing the pattern,
"force link" pieces that don't belong together, overriding any
disharmony, except perhaps for a lingering uneasy feeling, soon lost
in the excitement of argument. Our guess is that the rare people with
a talent for leaping to correct conclusions don't try to build
patterns where there are any subtle doubts, or that they more rapidly
draw the implications that would prevent the pattern from being
built.)
If we were to try quantifying the level of brainpower necessary, our
guess is that it's around the 10,000:1 or 100,000:1 level. This
doesn't mean that everyone with a 160 IQ or 1600 SAT is fit for the job,
nor that anyone without a 160 IQ or 1600 SAT is disqualified.
Standardized tests don't necessarily do a very good job of directly
measuring the kind of horsepower we're looking for. On the other
hand, it isn't very likely that the person we're looking for will have
a 120 IQ or a 1400 on the SAT.
We sometimes run into people who want to work on a seed AI project and
who casually speak of "also hiring so-and-so who's a senior systems
architect at the place I work; he's incredibly smart". The
"incredibly smart" senior systems architect is probably at around the
1,000:1 level; the person who used the phrase "incredibly smart" to
describe that level of intelligence is probably around the 100:1
level. That's not good enough to be an AI programmer; even leaving
aside the ethical requirements, this is simply out of range for
positions that can be filled by hiring friends of friends. Maybe if
you went looking on the Extropians mailing list, or a Foresight Gathering, you would find a few
potential AI programmers. To expect to find an AI programmer at your
workplace... it's not going to happen. Too much of a coincidence.
The theory of AI is a lot easier than the practice, so if you can
learn the practice at all, you should be able to pick up the theory on
pretty much the first try. The current theory of AI we're using is
considerably deeper than what's currently online in Levels of Organization in General
Intelligence - so if you can master the new theory at
all, you shouldn't have had trouble with LOGI. We know people who
did comprehend LOGI on the first try; who can complete patterns
and jump ahead in explanations and get everything right, who can
rapidly fill in gaps from just a few hints, and who still don't
have the level of ability needed to work on an AI project. You need
to pick up the theory very rapidly and intuitively, so that you can
learn the practice in less than a lifetime.
To be blunt: If you're not brilliant, you are basically out of luck on
being an AI programmer. You can't put in extra work to make up for
being nonbrilliant; on this project the brilliant will be putting in
extra work to make up for being human. You can't take more time to do
what others do easily, and you can't have someone supervise you until
you get it right, because if the simple things have to be hammered in,
you will probably never learn the complex things at all.
So you'll learn AI programming after the Singularity and probably get
more real enjoyment out of it than we did, because you won't be
rushed.
Very few people are qualified to be AI programmers. That's how it
goes.
(We're sorry if you got the point a while back and we've just been
hammering it in since then, but it takes a lot of effort to pry some
people loose from the idea.)
In the third category are tasks that can only be accomplished by
someone capable of independently originating and extending AI theory.
We will have to hope that all of these jobs can be done by one person,
because one person is all that we are likely to have.
We distinguish between programmers, AI programmers, and Friendly AI
programmers, corresponding to the three task types.
Some people are reading this and thinking: "Well, brilliant people are
hard to find; you may have to compromise." You'll be glad to know
that we did compromise. We compromised away from requiring that an AI
programmer be capable of independently extending the AI theory.
Unfortunately, we think that's as far as the compromise can go while
retaining what seems like a realistic probability of success. We still
worry that being brilliant enough to grasp AI theory is not
enough, and that it may turn out a Friendly AI project cannot in
fact succeed without many people of the caliber needed to
invent AI theory. But it seems like people of that level might
be impossibly hard to find, and we see a realistic chance of succeeding
with the merely brilliant - people who fully comprehend the theory,
even if they would not have been capable of inventing it and cannot
take it further, as long as they have enough hackerish creativity to
apply the theory and translate it into a systems design without a
Friendly AI programmer needing to hover over them constantly.
The inherent difficulty of the problem is logically unrelated to what
seems like a "reasonable demand", or what's easy to accomplish with
resources we already have. The worker bees of an AI project would be
queens in any other hive, people who would ordinarily be inventing
their own brilliantly original ideas and translating them into
designs. Finding that class of people will not be easy.
Hopefully this leaves some margin for error, so that there is
enough free creative energy in the project to handle unexpected
obstacles. The other problem with "compromising" on people who are
less than genuinely brilliant is that the project would always be
struggling - no safety margin. Such a project will fail the first
time something pokes it.
"What should I study in order to join the team?"
Are you looking to fill one of the nongenius slots? If so, the
primary prerequisite will be programming ability, experience, and
sustained reliable output. We will probably, but not definitely, end
up working in Java. Advance knowledge of some of the basics of
cognitive science, as described below, may also prove very helpful.
Mostly, we'll just be looking for the best True Hackers we can find.
But bear in mind that other requirements are universal - we can't
"just hire" programmers, even if you know some really smart guy at
your workplace, etc. See the later comments about ethics, and the
earlier comments about people who might panic in a crisis.
"What should I study to become an AI programmer?"
At minimum you will need to grasp the elementary ideas of information,
entropy, Bayesian reasoning, and Bayesian decision theory; the things
that bind together the physical specification of a system and its
cognitive content. You should see the "information content" of a
pattern as its improbability or its Kolmogorov complexity, rather than
ones and zeroes on a hard drive; you should be capable of explaining
how the scientific method and conversational argument and the visual
cortex are all really cleverly disguised manifestations of Bayes'
Theorem; you should distinguish between subgoals and supergoals on
sight and without needing to think about it.
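To make that concrete, here is a minimal sketch - illustrative only, not project code - of two ideas from the paragraph above: information content as the negative log-probability of a pattern, and a Bayesian update binding evidence to hypothesis. All the numbers are hypothetical.

```java
public class BayesSketch {
    // Shannon information content of an event with probability p, in bits:
    // improbability, not "ones and zeroes on a hard drive".
    static double infoBits(double p) {
        return -Math.log(p) / Math.log(2);
    }

    // Posterior P(H|E) from prior P(H), likelihood P(E|H), and P(E|~H).
    static double posterior(double prior, double pEgivenH, double pEgivenNotH) {
        double pE = pEgivenH * prior + pEgivenNotH * (1 - prior);
        return pEgivenH * prior / pE;
    }

    public static void main(String[] args) {
        // A pattern with probability 1/1024 carries 10 bits of information.
        System.out.println(infoBits(1.0 / 1024));          // 10.0

        // Hypothetical test: prior 0.001, 99% hit rate, 5% false-alarm rate.
        // The posterior is still under 2% - the prior does real work.
        System.out.println(posterior(0.001, 0.99, 0.05));  // ~0.0194
    }
}
```

If that second result surprises you, that is exactly the kind of reflex this prerequisite is meant to install.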
You should be familiar with the design signature of natural selection
- optimization by the incremental recruitment of fortunate accidents,
following pathways in fitness gradients which are adaptive at each
intermediate point and which are directed in the maximally adaptive
direction at each intermediate point. Not because we're going to use
evolution, of course, but because you need to know what a nonhuman
design process looks like and how it behaves nonanthropomorphically.
Evolution is currently the only example around.
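A toy sketch of that design signature, with entirely made-up parameters: a random point mutation is kept only if it is adaptive right now, so the search follows a pathway that improves fitness at every intermediate step.

```java
import java.util.Random;

public class SelectionSketch {
    // Fitness = number of bits matching a fixed target "environment".
    static int fitness(boolean[] genome, boolean[] target) {
        int score = 0;
        for (int i = 0; i < genome.length; i++)
            if (genome[i] == target[i]) score++;
        return score;
    }

    public static void main(String[] args) {
        Random rng = new Random(0);
        int n = 64;
        boolean[] target = new boolean[n], genome = new boolean[n];
        for (int i = 0; i < n; i++) {
            target[i] = rng.nextBoolean();
            genome[i] = rng.nextBoolean();
        }

        int current = fitness(genome, target), generations = 0;
        while (current < n) {
            int i = rng.nextInt(n);      // a fortunate (or unfortunate) accident
            genome[i] = !genome[i];
            int mutated = fitness(genome, target);
            if (mutated > current) current = mutated;  // adaptive: recruited
            else genome[i] = !genome[i];               // maladaptive: selected out
            generations++;
        }
        System.out.println("generations to match: " + generations);
    }
}
```

Note what the loop never does: accept a temporarily harmful change for the sake of a better design later. That blindness is the signature.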
You should be familiar with evolutionary psychology and game theory,
not because we're planning to build an imperfectly deceptive social
agent or make it play zero-sum games, but so that you can have applied
this knowledge to debugging your own mind, and so that you can detach
your knowledge of morality from the human domain.
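For instance, the iterated prisoner's dilemma - the standard testbed for the evolution of cooperation - fits in a few lines. The payoff values below are the conventional ones from Axelrod's tournaments; the code is a hypothetical illustration, not anything we plan to build.

```java
public class IteratedDilemma {
    // Row player's payoff, indexed [myMove][theirMove]; 0 = defect, 1 = cooperate.
    // Temptation 5 > reward 3 > punishment 1 > sucker's payoff 0.
    static final int[][] PAYOFF = { { 1, 5 }, { 0, 3 } };

    // Tit-for-tat: cooperate on the first round, then mirror the opponent.
    static int playRounds(boolean opponentAlwaysDefects, int rounds) {
        int score = 0, myLast = 1, theirLast = 1;
        for (int r = 0; r < rounds; r++) {
            int my = theirLast;                             // tit-for-tat
            int their = opponentAlwaysDefects ? 0 : myLast; // defector, or mirror
            score += PAYOFF[my][their];
            myLast = my;
            theirLast = their;
        }
        return score;
    }

    public static void main(String[] args) {
        System.out.println(playRounds(false, 100)); // vs. itself: 300, stable cooperation
        System.out.println(playRounds(true, 100));  // vs. always-defect: 99, exploited once
    }
}
```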
You should be familiar with at least a few specific examples of the
intricacy of biological neural circuitry and computing in single
neurons and functional neuroanatomy, not because we're going to be
imitating human neurology, but so that you don't expect cognition to
be simple. You should be familiar with enough cognitive psychology of
humans that you know human reasoning is not Aristotelian logic - for
example, Gigerenzer's fast and frugal heuristics, Tversky and
Kahneman, and so on; being able to interpret all of this as an
imperfect approximation of Bayes' Theorem is another plus.
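As one worked example from the Tversky-and-Kahneman material: the conjunction rule says P(A and B) can never exceed P(A), yet in the famous "Linda problem" most subjects rank the conjunction as more probable. The probabilities below are invented purely for illustration.

```java
public class ConjunctionRule {
    public static void main(String[] args) {
        // Invented numbers: P(bank teller), and P(feminist | bank teller).
        double pTeller = 0.05;
        double pFeministGivenTeller = 0.6;

        // P(teller AND feminist) = P(teller) * P(feminist | teller),
        // necessarily <= P(teller), since the second factor is <= 1.
        double pBoth = pTeller * pFeministGivenTeller;

        System.out.println(pBoth);            // 0.03
        System.out.println(pBoth <= pTeller); // true, always
    }
}
```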
You should have read through Levels
of Organization in General Intelligence and understood it fully.
The AI theory we will actually be using is deeper and less humanlike
than the theory found in LOGI, but LOGI will still help you prepare
for encountering it.
For a starting point on what you should read and understand deeply, see the
last question in the SIAI section of the Eliezer Yudkowsky Q&A, the Singularitarian Reading List (read everything there, and then plan to read four times as much technical material), and Michael Wilson's SL4 Wiki notes (Wilson is our second project team member).
The four major food groups for an AI programmer:
Cognitive science
Evolutionary psychology
Information theory
Computer programming
Breaking it down:
Cognitive science
Functional neuroanatomy
Functional neuroimaging studies
Neuropathology; studies of lesions and deficits
Tracing functional pathways for complete systems
Computational neuroscience
Suggestions: take a look at the cerebellum and the visual cortex
Computing in single neurons
Cognitive psychology
Cognitive psychology of categories - Lakoff and Johnson
Cognitive psychology of reasoning - Tversky and Kahneman
Sensory modalities
Human visual neurology. Big, complicated, very instructive; knock yourself out.
Linguistics
Note: Some computer scientists think "cognitive science" is about
Aristotelian logic, programs written in Prolog, semantic
networks, philosophy of "semantics", and so on. This is not
useful except as a history of error. What we call "cognitive
science" they call "brain science". We mention this in case you
try to take a "cognitive science" course in college - be sure
what you're getting into.
Evolutionary psychology
Popular evolutionary psychology; dating and mating; Robert Wright and Matt Ridley
Formal evolutionary psychology; neo-Darwinian population genetics
and complex adaptation; Tooby and Cosmides
Game theory; nonzero-sum and zero-sum games
Evolutionarily stable strategies for social organisms
Tit-for-tat, the evolution of cooperation, the evolution of cognitive altruism
Evolutionary psychology of human "significantly more general" intelligence
Mostly this means reading LOGI; there's not much else out there.
But see also Lawrence Barsalou and Terrence Deacon.
Evolutionary biology
Incrementally adaptive pathways; levels of organization; etc.
Biology (a complex system not designed by humans)
Genetics
Gene regulatory networks (another good look at evolution's
bizarre signature, and also a look at the way humans actually get
constructed)
Quantitative genetics
Anthropology - the good old days
Information theory
Shannon communication theory
Shannon entropy
Shannon information content
Shannon mutual information
Kolmogorov complexity
Solomonoff induction
Bayesian statistics
Interpretation of human thought as Bayesian inference - see Jaynes.
Any other kind of statistics
Utilitarian Bayesian decisionmaking
Decision theory is not classically part of "information theory",
but does, in fact, belong together with the other items in this
category
Actions and desirability - read Creating Friendly AI as a preliminary to the
longer story.
Computer programming
Knowledge of many languages
Java programming (that's probably what we'll end up doing it in)
Being an excellent programmer
Parallelism
Multithreading
Clustered and distributed computing - we may not need this for a
while, but then again, we may
Any kind of experience working with complicated dynamic data
patterns controlled by compact mathematical algorithms - some of
the interior of the AI may end up looking like this (see the
sketch after this list)
Other stuff
Computer security (experience with defensive caution; not that
it's sufficient for Friendly AI or even a good attitude, but it's
a start)
Physics
Thermodynamics
The second law of thermodynamics
Noncompressibility of phase space
The arrow of time and the development of complex structure
Traditional AI methods
History of error; don't repeat past mistakes
We might reuse one or two design patterns at some point
Transhumanism or transhumanist SF - sufficient exposure to have a
very high future shock tolerance; helps to "take it all in
stride"
Mathematics
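The sketch promised above, for "complicated dynamic data patterns controlled by compact mathematical algorithms": a one-dimensional cellular automaton (Rule 110), where the whole controlling algorithm is eight bits and the data it generates is endlessly intricate. Purely illustrative; we are not saying the AI will contain cellular automata.

```java
public class Rule110 {
    public static void main(String[] args) {
        final int width = 64, steps = 24, rule = 110; // the "algorithm" is 8 bits
        boolean[] cells = new boolean[width];
        cells[width - 1] = true;                      // single live cell as seed

        for (int t = 0; t < steps; t++) {
            StringBuilder row = new StringBuilder();
            for (boolean c : cells) row.append(c ? '#' : '.');
            System.out.println(row);

            boolean[] next = new boolean[width];
            for (int i = 0; i < width; i++) {
                // Pack the three-cell neighborhood into bits 2..0 (wrapping),
                // then look up the corresponding bit of the rule number.
                int neighborhood = (cells[(i + width - 1) % width] ? 4 : 0)
                                 | (cells[i] ? 2 : 0)
                                 | (cells[(i + 1) % width] ? 1 : 0);
                next[i] = ((rule >> neighborhood) & 1) == 1;
            }
            cells = next;
        }
    }
}
```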
Obviously we are not requiring a doctorate, in cognitive science or
any other field, because no doctorate is going to tell you one-fifth
of what you need to know.
What you need is not an academic specialization in any one of these
fields, but an interested amateur's grasp of as many of them as
possible. An AI programmer doesn't need to independently pull
together a single consilient explanation from this mess of separate
disciplines. What is useful is if you understand the basics of these
separate disciplines as they are usually understood, so that a
Friendly AI programmer who does know the consilient explanation has a
shared language in which to describe specific cognitive processes.
If you are to be able to implement cognitive processes on your own
without constant supervision, you will need to be very fast on the
uptake, fill in patterns from a few hints and get them right the first
time, learn new skills rapidly and easily, discriminate subtle
distinctions without needing them hammered into you, and have a great
deal of plain good-old-fashioned mental horsepower. Even so, much of
your first days on the project will still consist of having your
suggestions shot down over and over again, until you pick up the
knack. For every AI problem there are a million plausible-sounding
methods that do nothing; a thousand methods that are sort of
right or almost right or work a little but then break down; and a
small handful of manifestations of the right way. It's going to take
a while to catch the rhythm of how this works, and learn to
distinguish solutions that accomplish something from the ones
that spin their wheels and go nowhere.
"Gee, y'know, I think I'll start my own AI project, in like my
spare time while I'm finishing high school. How much effort does it
take to become a Friendly AI programmer?"
If you are not willing to devote your entire life to doing absolutely
nothing else, you are not taking the problem seriously. By that we
mean your entire life. We don't mean that you do it as a hobby.
We don't mean that you do it as your day job. We don't mean that you
give it all your time and energy. We mean that you allocate your
entire self to be sculpted by the problem into the shape of a Friendly
AI programmer.
There is nothing unusual about that. Consider human history, and how
many people have sacrificed so much more for so much less. Total
dedication is something that plenty of people can and do undertake,
and it isn't anywhere near as painful as commonly thought.
That is what it takes to start your own AI project, as a necessary but
nowhere near sufficient condition. You will also need to be as smart
as it gets. In terms of raw horsepower, there are probably tens or
hundreds of anonymous world-class geniuses around for each one that
makes it into the history books. It also takes luck, perseverance,
the right place at the right time, a field in chaos so that it can be
set in order, etc. But it is nonetheless true that only a world-class
genius has any hope at all.
"How much effort does it take to become an AI programmer?"
If you want to be an AI programmer you should be willing to devote
most of your life and yourself to AI. All of yourself would be
better, but is not strictly necessary, and we are aware that it is
considered hard.
"What does it take to get a security clearance?"
There are two approaches to screening for ethics. The first is to set
up a series of tests meant to keep out noticeably bad people. That is, you
start with an "allow all" policy and then add denials.
The second approach, which makes us feel a bit ashamed because it
seems exclusionary and unfair, is to say: "Even if someone passes
every formal test we can think of, we're not going to hire them
unless they strike us as unusually trustworthy, because anyone
else is just too much of a risk."
Having considered this at some length, we think that only the second
option is safe. If something feels wrong, but you can't really think
of a "justification" for avoiding it, avoid it anyway. Doing
something that makes you feel a little bit uneasy, without really
being able to say why, can be much more dangerous than taking a known
calculated risk. Often it means you know so little that you don't
have any idea how much danger you're in or how large a risk you're
taking.
We know some people whose ethics are such that we would actually feel
good about adding them to a Friendly AI project. The thought
of adding anyone else makes us feel a little bit uneasy.
You've probably read "The Lord of the Rings", right? Don't think of
this as a programming project. Think of it as being something like
the Fellowship of the Ring - the Fellowship of the AI, as it were.
We're not searching for programmers for some random corporate
inventory-tracking project; we're searching for people to fill out the
Fellowship of the AI. Far less important projects get hundreds of
millions of dollars and thousands of programmers. What is our
substitute? Knowing exactly what we're doing. Having
exceptional people on the team. Otherwise it simply isn't
realistic. You can't fill the positions we need to fill by running a
classified ad.
From the standpoint of personnel management, the original Fellowship
of the Ring was a disaster. The only real heavyweights were Gandalf
and Aragorn; two out of nine. Legolas and Gimli were added to fill
minority quotas. Boromir was allowed onto the team despite being
blatantly unreliable. No fewer than three useless hobbits
shoved themselves onto the team, and nobody screwed up the courage to
kick them off. The Fellowship splintered almost immediately after it
was assembled. In real life, the Fellowship would have been
strawberry jam on toast. We can, and must, do better.
This is Earth's finest hour, and Earth's most qualified should meet it.
* * *
Much of what's written above is for the express purpose of
scaring people away. Not that it's false; it's true to the best of our
knowledge. But much of it is also obvious to anyone with a sharp
sense of Singularity ethics. The people who will end up being hired
didn't need to read this whole page; for them a hint was enough to
fill in the rest of the pattern.
Of course you would have to dedicate your life to a Friendly AI
project. Of course you have to be brilliant. It's a
Friendly AI, for goodness's sake! It's asking enough of human
flesh and brain that we build one at all, let alone that we do it with
less than our full effort.
The standards are set terribly high because this is a terribly high
thing to attempt. We cannot be "reasonable" about the requirements
because Nature, herself, has no tendency to be "reasonable" about what
it takes to build a Friendly AI. This is and always was an
unreasonable problem.
No one who might actually be hired will be scared off by the thought
of an extremely difficult job or harsh competition. If someone is
brilliant enough to have any realistic chance of becoming an AI
programmer, and ethical enough to be accepted, nothing we could
possibly say would scare them away from applying. They are not
thinking in terms of "scared/not scared"; they are thinking in
Singularity ethics.
If that describes you, please subscribe to the
mailing list. Plenty of people
have expressed vague interest, but SIAI has very few serious
candidates. Right now we do not have enough AI programmers to put a
real team together (we only have two at this time, including Eliezer
Yudkowsky), and that would be a blocking problem even if we had a million
dollars tomorrow. Subscribe even if you're not sure you're totally
brilliant; if we eventually accumulate a bunch of totally brilliant
people and it's clear you're outclassed, you can always withdraw
yourself from consideration at that point. Meanwhile, it may prove
useful to have gathered a group of the smartest people available at
any given point.
If that doesn't describe you, then again we emphasize that not everyone
has to be an AI programmer! You don't have to be an AI programmer to
help! It may be frustrating to see something exciting, like the
superintelligent transition, and not be able to rush at it
immediately, but even getting the project started is a serious
strategic problem with many simultaneous dependencies. Don't try to
solve the entire problem in a single step. See if you can figure out
how to make one definite contribution. Then you can go on to consider
expanding it.