
Speculative Web Space

NUPSA Philosophy Sessions (c) William Pascoe, 2018 (independently written and produced, this content is not owned or provided by UON or any Government or Commercial entity, except externally sourced material)


Artificial Semiosis

A parsimonious theory of artificial intelligence, learning, cognition, sentience and free will.

While there is a lot of talk, promotion and fear around artificial intelligence at the moment, and its use is becoming common in many aspects of our lives, for good or ill, there doesn't seem to be a thorough explanation of what intelligence is, how it comes to be, and how we could make it artificially in the fullest sense - something humans create from scratch, that is not human, yet thinks and acts for itself, just as humans do. To figure out how to make intelligence we need to figure out how intelligence works, and to do that we should start with simple examples. It's not reasonable to think we will suddenly be able to invent a walking, talking, feeling robot like C3PO. What is intelligence? One way to begin answering that is to ask: what is the simplest possible system that could be described as intelligent? We should begin with relatively 'stupid' intelligence. A dog is somewhat intelligent, cephalopods are relatively intelligent; worms and lobsters have nervous systems, but are they intelligent?

Drawing on software development, evolutionary biology, thermodynamics, neurology, psychology, semiotics and philosophy, Dr Bill Pascoe attempts a full, coherent account of how and why artificial intelligence, learning, cognition, sentience and free will are possible.

The following presents a brief understanding of how artificial intelligence is possible. When speaking of artificial intelligence it is generally accepted that there are two main approaches:

- building systems that perform particular tasks we associate with intelligence (playing chess, recognising images, planning routes), without any claim that the system really thinks; and
- building something that genuinely thinks, learns and acts for itself, in the full sense that humans do.

These are sometimes distinguished by the terms 'weak' and 'strong' AI respectively, as by Searle. I just don't think they are good or adequate words though, since strength and weakness have little to do with what actually distinguishes the two, and as we'll see later, Searle is one among many whose thinking we need to move away from to understand this topic.

Part of the problem is understanding what 'intelligence' is. What do we mean by that word? We tend to mean something like what humans can do with their minds. It seems to be what it means to be human. It's almost like saying 'Artificial Human', except by intelligence we tend to exclude other human capacities, such as emotions - yet perhaps emotions are essential to the functioning of intelligence, so we can't really rule out any other human capacity. At the same time we recognise that other animals have some sort or degree of intelligence, so it's not strictly human. We might explore a very limited sort of intelligence, 'artificial stupidity', i.e. having intelligence, but not much of it - lobsters have neural nets after all.

Nobody is really clear what 'intelligence' means, but it is a field of research. Turing lazily sidesteps the problem by saying, in effect, that if a person would judge this person and that machine to be intelligent then they are. As we do research in artificial intelligence we learn about what intelligence is and what we mean when we say it. People who can play chess well are intelligent, so playing chess seems to be a property of having intelligence. Some early successes in AI were in chess playing, and it's well known that computer chess can beat chess masters. Yet something that can win at chess, and only that, according to the rules it has been programmed with, doesn't seem to be 'intelligent'. So we have learned something useful about computation and permutations that can be used in industry, and we have learned that what we mean by 'intelligent' isn't just 'able to play chess' and must be something more than 'following programmed instructions' (a point that becomes relevant to free will).

AI research is an iterative process where we can't even define the topic of study when we begin, but we learn what it is we are studying as we go. [See Heidegger on the hermeneutic circle, which relates also to Uexkull's Umwelt.] AI enables us to understand ourselves as well, and so could be considered a branch of applied philosophy. If we can make an artificial version of ourselves that works in some way, we have understood something; if it fails, we have also understood something, or at least that we have misunderstood something.

Because of this iterative approach it's important to have an understanding of a wide variety of fields. It's not as if a problem is presented that requires a certain skill set, or that it is so well understood it can be analysed and broken up among experts and technicians. Scientists must learn philosophy. Philosophers must learn engineering. Software developers must learn psychology, and so on. I cannot overstate the importance of this. Without it, common errors are repeated and time is wasted on wild goose chases. For example, scientists often have a naive understanding that the meaning of something is its denotation, but a quick conversation with any arts student would quickly educate them and fundamentally change the direction of their research. Similarly, philosophers often have naive understandings of software development. The philosophical implications of 'useful', by contrast to 'end in itself', might not even dawn on someone who has studied science and engineering only. A philosopher making some case that mind can never be implemented in software because software is serial, discrete and binary might miss the crucial points about resolution and speed which easily dispel these arguments, at least at the theoretical level the philosopher was arguing. Needless to say, there are many people who have overcome these 'traps for the younger player', so I mainly want to stress their importance to students who may still be entrenched in their own domain. Rather than accuse people of naivete I simply urge you to try not to be naive, as I urge myself each day. We can all sometimes be guilty of a dismissive attitude to schools we didn't grow up in.

If we are interested in the second type of AI we immediately confront a range of other related questions and branches of study. What is 'intelligence' in the full human sense? It seems to involve learning. Learning what? To be intelligent about something we have to have it exist in our minds - so suddenly we need to understand all about the philosophy, psychology and neurobiology of cognition. Does it require that we have free will? What does the colour yellow mean to an intelligence thinking about it? If we are to think something it must have some meaning. 'Yellow' for whom? If a machine categorizes images according to colour, that isn't the same thing as my experience of yellow. What is it to be self-aware, or 'sentient'? This approach to AI involves intelligence, learning, cognition, sentience and free will. It may involve other things like emotion and morality.

Because it involves so much more than 'intelligence'; because nothing can exist for us unless it has some meaning to us; because 'semiotics' is a general, ill-defined term that can include any and all of these things; because all these things can't be understood in isolation; because information technology, which is widely used in AI, can be regarded as a branch of applied semiotics; and because we will be looking at this as a process, I like the term 'artificial semiosis' - an artificial implementation of the process of meaning. I hope this makes more sense by the end.

Some of you may remember from epistemology that if we want to check whether the statement 'All crows are black' is true, we could a) check every crow to see that each one is black, or b) find a single white crow. If we try to check every crow we can never be sure we have checked them all, and it will take a very long time. If we find just one white crow we know the statement is false. (We leave aside here the angle that 'black' is an essential part of the definition of being a crow, such that a white crow would not be a crow by definition.)

This shows that statements about the world are often easier to disprove than to prove. Often we see people, some of them philosophers, making arguments that AI is not possible due to the nature of the mind, the brain, the computer, or some other consideration. But to prove this we would have to check every attempt at AI, now and into the future, and see that it failed. Since people will keep trying to invent it, and since telling someone something is impossible is as often as not a spur to make them prove you wrong, we will never be sure that AI is impossible, regardless of how much theory aims to 'prove' it.

On the contrary, we only need one example of AI working to prove it is possible. Although critique for and against, Socratic dialogue and dialectic, and arguments used to test the rigour of a theory on the way to truth and knowledge are all very valuable, the best approach is to have a go at making AI. Instead of looking for arguments to prove AI is impossible, we would focus our efforts on asking: assuming AI is possible, how would we make it? What would it be? And so on.

Firstly, it is assumed that it is material. There is no magic involved. There is no mind separate from the physical world. If we are to build artificial intelligence, our assumption must be that it can be done by manipulating the physical world. (If intelligence requires a non-physical mind or magic we will never be successful, but we can never prove that with finality. We can only prove it is physical, and can prove that only if it is.)

AI and Artificial Semiosis are then holistically interdisciplinary fields. It is very difficult to write an account of them because so many aspects from so many fields are all relevant and related at any given time - evolutionary biology, neurochemistry, software development, information theory, cognitive psychology, philosophy, romantic literature. Where to begin? We must begin somewhere, and as we proceed, I hope it becomes clearer to the reader, referring back, that we are gradually building up different aspects of something understood as an (incomplete) whole. It makes sense, though, that if we aim to build something like ourselves - our own minds, our humanity, our ability to think and be self-aware - it would involve all disciplines from poetry to physics.


We don't even know what we are talking about - this makes it hard. It is not clearly defined what 'intelligence' is. But we work it out back and forth. Being able to win at chess is not 'intelligence'. More broadly, what is human?

In an ill-defined field, we won't know what to call it, nor what the definition of that word is, until we have arrived at some definitive point. Until then it is a matter of exploring aspects of various concepts and investigations in science, philosophy and any relevant area until things become coherent.

If we did know clearly what 'intelligence' was, there wouldn't be much to research.

- The minimum, simplest sentient system - dogs, lobsters.
- Intelligence, learning, cognition and free will are all part and parcel of the same theory of sentience, hence it is parsimonious.
- This is not reductive: the best way to communicate the human experience to each other remains the humanities - language, images, theatrics, cooking.

None of this is my own original work. Almost every thought or insight here is somebody else's. I only think that what is lacking is putting all these pieces of the puzzle together into a coherent whole account. Even if I did draw some conclusions on my own (perhaps the idea of focusing development on OUIP instead of IPO, to change the probability of mappings in the world as well as within the system), there are more than 7.5 billion people in the world and many of them are interested in AI, so no doubt someone will have thought of it as well, or before. As I must spend my time working there isn't time to go and check who already has these ideas, as we'd normally do, but I worry that no-one is setting out this full account, and that people are focused on only one part of it, or are sidetracked by all the hype about whether robots are going to save us or kill us all. Like everyone I'm going to die all too soon. I worry that there was once a person in the world who could explain all this and they died being too busy to tell anyone. So here it is.

Here are some major works I remember getting a lot of ideas from, which I can recommend to anyone interested in AI. Not all of them are recommended because they are right; sometimes they are wrong in important ways.

In particular, people with IT backgrounds should read the philosophy and science, people with philosophy backgrounds should read the science and IT, and so on, to avoid the naive assumptions that come from your own discipline's misunderstanding of the others. Since the global supersystems within which we are small nodes constrain us to specialise, as we will be most 'useful' (and so most 'used') that way, it's rare and difficult for us to have such broad reading - so we have to make a special effort of resistance, active research and criticism, to gain a good understanding of ourselves.

Any textbook that gives a basic 101 understanding of:

Considerations

Genius
See also IOT 'Inspiration and Genius'
From IOT on Originality:
Being the origin (novelty comes later)
Wordsworth on originality - innate? related to commercial novelty instead of land inheritance? Scientific innovation?
Coleridge, interjection. 
Harking back to the past, more than the future.
Hegel's critique is that the romantic view gives the illusion that creation is unhistorical. Works are a product of traditions.
T.S. Eliot, 'Tradition and the Individual Talent'.
Heidegger - seems to point to the unique individual, but actually it's in the working of a work of art. Art happens when truth happens through the working of the art; it's in the historicity and depends on the one who appreciates the art. The 'origin' is in the historically determined moment.

(Easily found elsewhere.)

Flight provides a good analogy when speaking about things like intelligence, sentience, cognition and so on.

These are things we are capable of, just as flight is something birds are capable of, and 747s are artificial flight. It is not as if there is a thing inside a bird we can point to that is flight. It is an outcome of processes that result in flight. Flight is none the less a real thing. Understanding flight enables us to build artificial flight. It may be that it is not exactly the same, but it is flight none the less. So too we might expect that artificial intelligence/cognition/sentience/free will/etc. is recognisably so, but we shouldn't expect it to be exactly the same as the human version ('It's life, Jim, but not as we know it').

We should not expect an explanation of how and why a thing is and comes to be to look much like the outcome, or what it feels like.

For example, a mathematical formula describing the aerodynamics of a bird's or 747's wings doesn't look much like something flying through the air. It doesn't feel much like flying. It explains how flight is possible, and enables us to build such things as 747s.

Similarly, the configuration of a neural net, or an account of probabilities in systems doesn't look much like our experience of yellow or our desire for a drink of water, but it may explain how it comes to be, and enable us to build something capable of such experiences. To determine it is actually having them we may ultimately have to ask it, just as we would have to ask someone about their experience of yellow. If we were to have direct experience of someone else's experiences we would be that person. We can only ever surmise we have similar experiences, though there is good reason to think we do (see discussion elsewhere about translation).

Flight is a process. Flight is not a noun, a thing that is inside a bird as a component that can be added or removed. It is not an adjective, like a colour, shape or other property. It is an ability that arises from an interactive process - it depends on shape, yes, but that is part of the process of forward momentum, air flow, the Bernoulli effect, ratios of weight, fuel supply etc. If this process can be implemented with steel and jet fuel, rather than feathers, flapping, blood, and so on, it is still flight. Intelligence/sentience/semiosis etc. also depends on some material instantiation, but it is an ability emergent from processes. This is important in asking whether it is possible to make it artificially. If it were a material substance, only brain matter could be intelligent. Because it is a phenomenon that emerges from a process, it doesn't matter what material it is implemented in, so long as it enacts some process which produces that phenomenon, or ability - to fly, or to think.

The Turing Test

Most of us are probably familiar with the Turing Test. Alan Turing was one of the first figures to formalise the science of computation with his Turing Machine, devised ways to break German codes in WWII, and was instrumental in designing some of the earliest computers. He is also famous for the 'Turing Test' for artificial intelligence. In this test a wall or barrier is set up. A person can send and receive messages to what could be either a computer or a person on the other side of the barrier. If the person cannot tell whether they are communicating with a computer or a person, then the computer can be considered intelligent. For Turing it doesn't matter what we think intelligence is, nor does all the philosophising we might do about whether mind is material or immaterial, or whether machines can think - if they are indistinguishable, and humans are intelligent, we must accept the machine is too.

It's easy enough to see problems with this approach. If the person could pretend to be a computer, would we really agree that the person is a computer?

When I was about 13 years old, at the entrance to an old theatre I saw a mechanical wax dummy that was incredibly lifelike; it seemed almost to be a person. Naturally intrigued, I walked up to see how it worked. It tipped its hat, said good evening, stepped off its pedestal and walked out the door. Was it a robot just because I thought it was? There is some compelling aspect to this philosophically: regardless of what checks and tests I do, I still can't get around the problem of subjectivity - that I can have no certainty about reality beyond what I think and perceive it to be.

None the less, it's easy enough to see that a magic trick is not enough reason to accept my perception. We don't trust our own perceptions until we have adequate evidence. What would this evidence be? The Turing Test is useful in highlighting that appearing human in conversation is not, on its own, an adequate test. It highlights that we need to have some idea or understanding of what intelligence is and be able to check for it. We need to have a theory of intelligence, so that we can check whether intelligence is there.

We will see later, that what happens inside a system is a crucial part of this theory.

The Chinese Room

In the Chinese Room thought experiment the philosopher Searle asks us to imagine a person in a room who receives Chinese characters. The person cannot speak Chinese but looks up each character in a book and matches it to other characters, which they then provide as output. To someone outside the room who can speak Chinese, these answers might make complete sense. Searle points out that we wouldn't say the person in the room really understands Chinese just because they can match the right characters together, and that this is all a computer does - manipulates symbols according to rules - and so we can't say that a computer can be intelligent.

One problem with this is that if we made an intelligent computer, we wouldn't say it counts as intelligent only if there were someone intelligent inside it producing all the outcomes. The question then would be: what makes that intelligent thing intelligent? Is it some other intelligence inside that intelligence - and so on, ad infinitum? The real question is whether the Chinese room as a whole is intelligent. It's irrelevant whether the person handling cards understands Chinese or not. They may as well be a robotic arm. The question is whether the Chinese room, considered as a whole system - the input, the matching of cards - is an adequate explanation of how intelligence works. We must accept that intelligence is not composed of 'intelligence' but of functioning parts which, acting together in certain ways, constitute 'intelligence'. I don't think the room is intelligent, but the point is to ask what sort of an arrangement in the Chinese room could be called 'intelligent', or in this particular case could be said to 'understand' Chinese - without anyone in it who provides the 'intelligence'.

Programmed Paradox

How could we say that the Chinese characters have meaning for the system in the room, rather than just for the person feeding them in and getting them out? We see here that 'for the room', rather than 'for me' the programmer/observer/experimenter, is a crucial point. The character would need to determine some course of action in the room's own interest. We can see already that this applies to cognition also, at least to some degree or as part of the explanation - it might appear to us outsiders that the entity can have a thought about something, or be experiencing some perception, but as with people, we can only surmise this from its actions. If thoughts and actions can somehow be said to be for the entity itself, beyond my interpretation, beyond its appearance to me, then at least in some limited sense we can say the entity has had a cognition/thought/concept/understanding of it - that it means something to it, not just to me.

What such a theory of intelligence must explain still isn't completely clear, but every attempt to create intelligence teaches us more about what we mean by 'intelligence'. For example, intelligent people can play chess - yet we don't regard a chess-playing machine as 'intelligent'. It is only following and playing out our carefully programmed instructions. So simply having some properties or abilities of other things that are intelligent doesn't seem quite enough. Yet it's implicit that if we are to build artificial intelligence it will be something that follows the instructions we have given it, most likely a software program. It seems a contradiction already - that AI must be programmed to follow instructions, yet must not merely follow instructions. But this is only a paradox - a seeming contradiction. Perhaps we could program something to do more than follow instructions, in such a way that those instructions enable it to think, act, solve problems and so on for itself. This 'for itself', as opposed to 'for the programmer', is an important distinction when asking whether it is 'really' intelligent or 'just a simulation' of intelligence. To decide whether it 'really' is, or is a trick or simulation, we must have some understanding, theory or criteria that determine what intelligence is, in order to make a judgement - how can we recognise it if we don't know what it is? My aim here is to attempt a coherent account of what intelligence, and all those things associated with it, is. Students may remember that the Greek word for knowledge, 'episteme', implies not only that we believe something to be true that actually is true in reality, but that we can give a plausible explanation.

(some key texts on this: Schrodinger, Prigogine, Maturana and Varela, Bickhard)

If we want to make artificial intelligence, we must proceed under the assumption that intelligence arises from phenomena in the material, physical world (it doesn't require magic to imbue matter with an immaterial soul or mind, etc.). It makes sense then to try to understand how intelligence, as it exists already, came about in this physical world. That will probably give us clues about what it is and how it works, which we can then emulate in our artificial system. It seems hard to imagine how something as complicated as intelligence can emerge from a bunch of stuff, just interacting atoms. Many scientists have put a lot of thought into this, so here is a story about how it is possible for intelligence to arise from the stuff of the universe:

The second law of thermodynamics says, "Heat cannot spontaneously flow from a colder location to a hotter location." This apparently obvious and innocuous statement has profound ramifications: time has a direction, and the whole universe will die a 'heat death' where all interactions that might happen have happened, and the Universe ends 'not with a bang but a whimper', settling into a cold, still death, like a bowl of soup left to sit too long. The 2nd law is stated in various ways: "The second law of thermodynamics states that the total entropy of an isolated system can never decrease over time."

"At a very microscopic level, it simply says that if you have a system that is isolated, any natural process in that system progresses in the direction of increasing disorder, or entropy, of the system."

"The second law of thermodynamics states that the entropy of any isolated system always increases."

Some ways to convey what this means: once interactions happen they are hard to undo without the use of energy from some external source (and there is no source external to the Universe as a whole). If you break an egg it's hard to unbreak it. If you mix things together they are hard to separate. If there is heat on one side of something, the atoms all jostle each other about until the heat is evenly distributed.

The 2nd law of thermodynamics means that order tends towards chaos, and this chaos can't be undone.

The concept of 'entropy' is crucial to the 2nd law. Note that the meaning of 'entropy' in Information Theory is different from, but analogous to, its meaning in Thermodynamics. It is a simple concept but can be hard to explain. Entropy is a measure of the disorder of a system. It is also a measure of how much can potentially happen. If things are separated they are highly ordered. As they mix, their entropy increases. If there is a container separated in two, with a hot gas on one side and a cold gas on the other, when the barrier is removed the excited atoms on the hot side fly around colliding with the cold atoms on the other side. They move around and transfer that energy to each other, mixing until their movement is evenly distributed across the whole container - order goes to chaos, entropy increases. The opposite of entropy (order) is sometimes called 'negentropy'.

Entropy is a statistical measure; it is about probability. If I look at a point on one side of the container, what is the probability that I will find this thing or the other? If the probability is high - i.e. I can predict it will be this and not that - there is low entropy: the system is highly ordered. If things are disordered, or mixed up, any outcome is about as likely as any other, and entropy is high.
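To make this statistical sense of 'entropy' concrete, here is a minimal sketch in Python (the language is my choice for illustration; the two probability distributions are invented). It computes Shannon entropy, the Information Theory analogue mentioned above, for an 'ordered' container, where we can predict what we will find at a given point, and a fully 'mixed' one, where we cannot.

    import math

    def entropy(probabilities):
        # Shannon entropy in bits: a measure of how unpredictable an outcome is.
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # Ordered: sampling a point on the 'hot' side, we are almost certain to find a hot atom.
    ordered = [0.99, 0.01]   # P(hot atom), P(cold atom)

    # Mixed: after the barrier is removed and the gases mix, either is equally likely.
    mixed = [0.5, 0.5]

    print(entropy(ordered))  # about 0.08 bits: low entropy, highly ordered, predictable
    print(entropy(mixed))    # 1.0 bit: maximum entropy for two outcomes, disordered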

So the big question is if the interactions of atoms tend towards an even, uneventful chaos - if the tendency of the Universe and all in it is towards entropy, how can something so highly ordered as life and intelligence come about?

The answer is that order and complexity emerge from disorder, on the path towards overall disorder. For example, if some interactions happen faster than others, you'll see an 'order' in the sense that this flow here is distinct from that flow there. Sometimes some small difference leads to an overall big difference in the long run. Consider water eroding a perfectly smooth rock. A drop of rain falls. Due to some infinitesimally small difference in the atoms, or a tiny difference in the gradient of the rock, the raindrop rolls off the rock in one direction. As it rolls, it erodes a few atoms of the rock. These few missing atoms mean that the next raindrop will tend to roll that way too, eroding more. The more raindrops fall, the more they fall that way, and the more they erode that pathway, until a gully is carved out of the rock. From a random situation, it becomes more and more probable that the rain will flow away in one direction. A pattern, or an ordered shape, is carved out of a uniform rock. Ultimately, though, this order is temporary, and emerges en route to the complete erosion of the rock. So order can emerge from chaos.
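The erosion story can be sketched as a toy simulation (the rock, the five channels and the erosion rate are all invented numbers; only the feedback rule matters). Each raindrop is slightly more likely to follow a path that earlier drops have already deepened, and a uniform surface drifts toward one dominant gully.

    import random

    random.seed(1)

    # Depth of erosion along five possible channels across an initially smooth rock.
    channels = [1.0] * 5   # uniform start: every path is equally likely

    for drop in range(10000):
        # Each raindrop picks a channel with probability proportional to its depth...
        path = random.choices(range(5), weights=channels)[0]
        # ...and erodes it a little, making that path more probable for the next drop.
        channels[path] += 0.01

    print([round(c, 1) for c in channels])
    # Typically one channel ends up far deeper than the rest: a gully, an 'order',
    # has emerged from an initially uniform situation.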

How can something as complicated, as improbable, as life emerge? DNA is a very complicated material. A living cell is a very complicated and highly ordered process. It's extremely unlikely that such a configuration of atoms should spontaneously happen. Statistics can help explain how unlikely coincidences are quite likely to happen. Statistical error occurs with small sample sizes. If you toss a coin we expect an equal number of heads and tails overall. If we toss the coin a hundred times, the totals tend to even out at around 50 each, give or take a bit. However, if we toss a coin only 4 times, sometimes we'll get 2 heads and 2 tails and sometimes we'll get all heads - i.e. 100% heads, which seems highly ordered. If you were to take a random sample of 4 coin tosses from a hundred, sometimes you will find that strange coincidence of 4 heads in a row, even 5 or 6 or 10 heads in a row. It is actually highly unlikely that the coin toss will evenly alternate between heads and tails. I.e. it is, paradoxically, probable that improbable coincidences will occur. Now it is unlikely that we will ever see 100 heads in a row - yet given enough tosses (on the order of 2^100 of them) we'd start to expect even such an extremely unlikely coincidence. Over the billions of years the Earth has been around, how many interactions between atoms have there been? It is likely that at some point some unlikely configuration will happen that begins the processes necessary to life.
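A quick way to see that improbable runs become probable given enough trials is to simulate coin tosses (a sketch; the run lengths quoted in the comments are what one typically sees, not guarantees).

    import random

    random.seed(0)

    def longest_run_of_heads(n_tosses):
        longest = current = 0
        for _ in range(n_tosses):
            if random.random() < 0.5:   # heads
                current += 1
                longest = max(longest, current)
            else:                       # tails
                current = 0
        return longest

    # A run of 10 heads has probability (1/2)**10, about 1 in 1000, yet...
    print(longest_run_of_heads(100))        # ...100 tosses often contain a run of 5 to 8 heads,
    print(longest_run_of_heads(1_000_000))  # and a million tosses typically contain a run of about 20.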

Life is indeed extremely unlikely. Consider someone who wins the lottery - the chance is one in a million, yet someone wins it, and finds it hard to believe they have won since it is so unlikely. So too, it is unlikely that life will emerge on a given planet, yet over all the planets in the Universe and all of time it's likely to happen somewhere, some time. And the person living wonders at how improbable their existence is.

The formation of a membrane is, for some theoretical biologists, as important as the formation of some sort of DNA prototype. A membrane creates a distinction; it establishes an order of things on one side and things on the other, inside and out. A membrane allows some things to cross over and others not to. Some natural processes lead to the formation of such a membrane. Again, probability becomes very important.

Life is a process where nutrients and fuel are taken in, processed and expelled. By some probable improbable chance a negentropic process emerges. If the outcome of this process is to increase the probability of that process continuing, it is more likely to persist. If not, it will cease to exist. So improbable situations that increase their own probability are more likely to be found in the world. An example of a process whose outcome is the increased probability of its own existence is a flame. A flame heats air. This causes the air to rise. This draws away the material produced by the reaction (smoke) and creates a convection current in which oxygen is sucked in from below to combine with the fuel, thus perpetuating the reaction and the whole process. (Note that this is a temporary ordered process, but it ultimately ends in entropy as the fuel is burnt out.) So too, life is an improbable, far-from-equilibrium thermodynamic process that increases the probability of its own existence.
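The flame can be caricatured in a few lines (all the constants are invented; the point is only the feedback loop, and that it ends when the fuel does):

    # A toy 'flame': a process whose output (heat driving convection) increases the
    # probability that the process itself continues - until the fuel runs out.
    fuel, heat = 100.0, 1.0

    while fuel > 0 and heat > 0.1:
        burn = min(fuel, 0.2 * heat)   # a hotter flame draws in more oxygen and burns more fuel
        fuel -= burn
        heat = 0.9 * heat + burn       # burning sustains the heat that sustains the burning

    print(round(fuel, 2), round(heat, 2))  # ends with the fuel exhausted: the order was temporary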

So with a protective membrane surrounding some sort of proto-DNA, the unlikely process of life becomes more and more likely. Through improbable events, those processes that increase the probability of their own existence result in those things existing more.

Evolution is an extension of this - those changes which increase the probability of existing are more likely to exist, and so ultimately do exist. Those that don't increase the probability of existing die out. Overall, through generations, this results in doing more of what is good for survival and less of what is bad.

Most AI is modelled on the processes of the neural nets that occur in nervous systems. The neurons in our nervous system are connected to each other; each one has inputs, electrical signals, from other neurons. It has an internal state which, when raised to a certain threshold by inputs, is triggered, outputting electrical signals to other neurons it is connected to. Neurons either inhibit or excite, and they do so with a certain weight. Although the output of a neuron is binary (either firing or not firing), it arrives at that state by degrees, and its role in the network is also a matter of degree. A neuron may be triggered more or less easily through the accumulation of inputs, and may inhibit the transfer of signals. It may also be connected to many neurons or few. The key to learning is that this 'weight' may change over time. The weight is adjusted by reinforcement. If the firing of this neuron corresponds with things going well, it is reinforced - the neuron increases its firing potential, or decreases it further, as the case may be. It may also grow, connecting to other neurons. Overall, through these changes and reinforcements, complex signalling pathways and blocks are established. Importantly, these correspond to what is 'good' for the animal, or inhibit what is 'bad'. Note that this activity corresponds to the outcome of evolution, simply put: doing more of what is good for survival and less of what is bad (which is not a bad definition of 'intelligence'). Except that with neural activity we can do more of what is good and less of what is bad without dying. Intelligence can be seen as an internalised, abstracted extension of the evolutionary process. Needless to say, those entities which can do more of what is good for survival and less of what is bad, without dying, are more likely to exist. (Note that since this is explained through probability and explains how neurons sustain negentropic systems, we have a thermodynamic account of mind.)
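A minimal sketch of the kind of unit described above - weighted inputs, a threshold, a binary output, and a crude reinforcement rule - might look like this (the numbers and the external 'reward' signal are invented for illustration; real learning rules, Hebbian or gradient-based, are more involved):

    class Neuron:
        # A toy threshold unit: weighted inputs accumulate and trigger a binary output.

        def __init__(self, n_inputs, threshold=1.0):
            self.weights = [0.5] * n_inputs   # degrees of influence, adjustable over time
            self.threshold = threshold

        def fire(self, inputs):
            total = sum(w * x for w, x in zip(self.weights, inputs))
            return 1 if total >= self.threshold else 0   # output is binary: fire or not

        def reinforce(self, inputs, reward, rate=0.1):
            # If firing coincided with things 'going well' (reward > 0), strengthen the
            # weights of the inputs that contributed; if badly (reward < 0), weaken them.
            for i, x in enumerate(inputs):
                self.weights[i] += rate * reward * x

    neuron = Neuron(n_inputs=2)
    stimulus = [1, 1]                 # a pattern of inputs that turns out to be 'good'
    for _ in range(5):
        if neuron.fire(stimulus):
            neuron.reinforce(stimulus, reward=+1)
    print(neuron.weights)             # the weights grow: the 'good' pathway is reinforced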

So it's likely that complex order should emerge on the path to absolute chaos, that such an unlikely thing as intelligent life should exist in a doomed universe, and that this intelligent life should be perplexed about how it came to be.

Free Will Systems

Free

An alternative name for the systems I'm trying to describe here might also be a 'free will system'.

Sartre's account of free will says that, as humans, we are capable of making choices because we are capable of imagining things being other than they are. His book 'Being and Nothingness' discusses freedom at length and hinges on this very simple point, which seems obvious only once it is said. There is what is - 'being' - and what is not - 'nothingness'. Our 'will' depends on nothingness, i.e. our ability to think of things other than as they are. We imagine a situation that 'is not', want to make it so, and take action to make it so.

"This means that from the moment of the first conception of the act, consciousness has been able to withdraw itself from the full world of which it is consciousness and to leave the level of being in order frankly to approach that of non-being. - Sartre, Being And Nothingness

Determinism is Moot

If everything in the Universe happens in a long chain of cause and effect over time, then everything is predetermined. If someone knew the position and direction of everything in the universe, they would in theory be able to predict everything that will happen. If everything is predetermined from the beginning, then how could there be free will? How could we conceive of things being different from what they are and make them different? How could our actions be said to be our own, if they are all a matter of predetermined interactions of atoms? How could we change what happens?

It is a common problem in philosophy, and in thought generally, to think in terms of absolutes and binary oppositions - if the movement of every atom is predetermined then there is no free will. But things might really be a matter of degree. The movements of atoms at a great distance from us have an infinitesimally small causal influence on us.

One answer to this is that we are not infinitely knowing and calculating entities, and so even if the universe is predetermined, at our limited level we don't know what will happen. Whether or not the universe is predetermined isn't then really relevant to whether we have free will. At my level, I see what happens, and I see what I might do to make something else happen. In this sense, if the Universe is predetermined, then I'm predetermined to freely will something; if it is not, then I'm not predetermined to freely will something - either way I have free will. How can this work in practice though?

The point at which something happens one way or the other (a 'bifurcation point', Prigogine) may depend on infinitesimally small differences one way or the other. There may be actual randomness in the Universe, or there may not, or there may be infinitesimally small influences that tip the balance one way or the other. These are mainly questions for physics, so I'll move on.

[How much free will do fish have?]

What does it mean to be free?

There are lots of ways to look at this, and we could define freedom in many different ways: in relation to politics, strategy, human rights, the ability to make something happen that might not have happened, the ability to imagine things being other than as they are. These are all worth bearing in mind, to inform what we do if we try to make an artificially free-willing thing work. In some cases they may be relevant to different problem domains. What would be something we can get working that we could describe as free and as having will? Here's one way:

At a simple systems level, something is free if its outputs can't be predicted from its inputs. Or better still, to allow for matters of degree, so that they might even be measurable: something is more free the less predictable its outputs are given its inputs. (Note how much comes down to probability.) So, for example, a pendulum, a simple system, is very predictable. Give it a push and you can predict with great accuracy that it goes back and forth at a certain rate, and so you can predict where it will be far into the future. A black box with a switch that turns a light on and off is a predictable system - flick it one way and we predict the light goes on, flick it the other way and we predict the light goes off.

(Note that this involves us as observers. What a 'system' is depends on what we say it is, what distinctions we draw about phenomena in the world. We define the 'system': what is predictable about it, what is inside of it and outside of it, and whether we consider such and such a thing as input or output, while possibly ignoring other events we are aware of or oblivious to. There are thermodynamic ways of specifying these - such as what the difference is between one substance and another, or a solid and a liquid, etc. - that could be used, but we won't go down this path at the moment, except to say that scientists already have good explanations of these things. The student should be careful to note, though, that it is we who define what a 'system' is and where its boundaries are, not the physical world.)

Chaos A pendulum with a hinge or two in it is sometimes used in museums as an example of chaos theory, or chaotic behaviour. If you swing a pendulum with hinges it wobbles all over the place, and it's hard to predict what position it will be in at any given time given the initial conditions. Predictability from inputs, or initial conditions, is what is important here. The weather is a common example of a chaotic system - it's hard to predict very far in advance, if at all, given the information we have today. Small differences at the outset can result in big differences in outcomes.

Complexity Although chaotic systems are often complex, and both are hard to predict, complexity is not the same thing as chaos. A pendulum with two hinges is relatively simple compared with the many factors influencing the weather, yet still chaotic. Similarly, a box with many switches and many lights may allow for very complex combinations of input states (switches off or on) and output states (lights on or off) - flick these two switches and you get these four lights on, flick these other three and you get a different combination - yet it may still be that every output is completely predictable from the input, and so not chaotic.

An important thing to consider here is whether a system behaves the same way every time. There are closed systems and open systems. Systems like politics and the weather are hard to predict because anything and everything might have some effect on the outcome through long chains of cause and effect - the so-called 'butterfly effect'.

So there are systems that are more or less complex, more or less chaotic, and we can also consider whether they behave exactly the same way every time - i.e. given the same inputs, do they give the same output every time?

State If a fish finds food, it eats it. A human might not. Dogs usually eat food in front of them; sometimes they bury it. A box with a button and a light that switches on every third time you push the button doesn't behave the same way every time, but is still predictable over time. If a system's output is not always predictable it might be because it is a 'state'-based system: there are internal states that affect the output when combined with input. For example, the box has a counter inside that counts the number of button presses, and when it gets to 3 the light goes on and the counter resets to 0. Animals have internal hunger states. A dog eats food in front of it if its internal hunger state is on, and buries it if its hunger state is off. (A system may also produce output without input, regardless of external stimuli, such as a beacon that emits a beep every minute - except that 'time' could be considered not an internal factor.)
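The button-and-light box and the hungry dog can both be sketched as state-based systems, where the same input produces different outputs depending on an internal state (a minimal sketch; the class names and behaviour are invented to match the examples above):

    class CounterBox:
        # Lights up on every third press: output depends on input AND an internal counter.
        def __init__(self):
            self.count = 0

        def press(self):
            self.count += 1
            if self.count == 3:
                self.count = 0
                return "light on"
            return "nothing"

    class Dog:
        # Same input (food in front of it), different output, depending on a hunger state.
        def __init__(self, hungry):
            self.hungry = hungry

        def offer_food(self):
            return "eats it" if self.hungry else "buries it"

    box = CounterBox()
    print([box.press() for _ in range(6)])   # ['nothing', 'nothing', 'light on', 'nothing', 'nothing', 'light on']
    print(Dog(hungry=True).offer_food())     # eats it
    print(Dog(hungry=False).offer_food())    # buries it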

Freedom If we say something is 'free' in the most general sense, we can ask: free from what? The world. I.e. its internal states and/or output are to some degree independent of the input. If I am free, the distinction is between 'I' and the world, or everything outside of me, or that is not me (what is inside and outside, and how this is decided or defined, whether subjectively or physically, is a problem that we'll discuss elsewhere - think about thermodynamics, the emergence of life from 'primordial soup', the importance of membranes, Maturana and Varela, etc.). The action originates from within me, from some internal process, more so than being an automatic, predictable response to the world.

In this simple sense, a system need only be chaotic for us to call it 'free': very little external stimulus would be making it do what it does. A chaotic system will have outputs that have unpredictable causal effects on the world around it. A walking machine with a robot arm might walk in any direction and strike anything, if its movements are determined by a few hinged pendulums. It would be 'free' but of course has no will.

(If you tell someone to do something and they behave randomly, you have very little power over them. You can predict a trained dog will sit when told, but a wolf won't - note that we sometimes regard it as 'intelligent' to be able to be trained, and sometimes to act independently.)

"It is strange that philosophers have been able to argue endlessly about determinism and free-will, to cite examples in favor of one or the other thesis without ever attempting first to make explicit the structures contained in the very idea of action. The concept of an act contains, in fact, numerous subordinate notions which we shall have to organize and arrange in a hierarchy: to act is to modify the shape of the world; it is to arrange means in view of an end; it is to produce an organized instrumental complex such that by a series of concatenations and connections the modification effected on one of the links causes modifications throughout the whole series and finally produces an anticipated result. But this is not what is important for us here. We should observe first that an action is on principle intentional." - Sartre, Being And Nothingness

Will

Paradoxically, to 'will' something, to act with some intention, we must also be predetermined.

To start looking at will from a systems point of view, we can draw a distinction between those systems whose outputs are more predictable from their inputs and those whose outputs are less so. Of those which are less predictable, we can draw a further distinction between those whose outputs are more random and those whose outputs are less so. I.e. there are some systems where the output is hard to predict from the inputs alone, but can be better predicted if we know the inputs and the internal states and processes. These may be very complex, making them hard to predict since we'd need a good understanding of them, but they are not random.

Another way of thinking about this is: how much information do I need to predict an output? With a black box that has a switch connected to a light, all I need to know is one bit of information about the input - whether the switch is on or off - and I can tell you whether the light is off or on. If it turns on every three switches, I need to know the state of the switch and of the internal counter in order to say whether the light will go on, but I can still tell you. If there is a randomised system internally, I can't predict it. If there are 8 input switches and complicated algorithms setting and drawing on different states in the system, I need a lot of information, but the output is none the less predictable in theory.
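One way to picture 'how much information do I need?' is to ask how well an observer who knows only the input can guess the output. A rough sketch (the two boxes and the guessing strategy are invented; this is a crude stand-in for a proper information-theoretic measure such as mutual information):

    import random

    random.seed(2)

    def predictability(system, n=1000):
        # Fraction of outputs an observer guesses correctly knowing ONLY the input,
        # always guessing the output most often seen so far for that input.
        seen = {}        # input -> {output: count}
        correct = 0
        for _ in range(n):
            x = random.choice([0, 1])                    # flick the switch at random
            counts = seen.setdefault(x, {})
            guess = max(counts, key=counts.get) if counts else None
            y = system(x)
            if guess == y:
                correct += 1
            counts[y] = counts.get(y, 0) + 1
        return correct / n

    simple_box = lambda x: x                         # the light just mirrors the switch
    random_box = lambda x: random.choice([0, 1])     # internal randomness ignores the input

    print(predictability(simple_box))   # close to 1.0: fully predictable from the input alone
    print(predictability(random_box))   # close to 0.5: the input tells us almost nothing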

Alternatively, think about patterns. If a picture is all green it's easy to predict what colour any part of it is. If it's half green and half blue, then I can predict with 50% certainty what colour any given point will be; or we might say, if I know which half is which colour and where the point (or pixel) is, then I can predict it. If it is stripes, or polka dots, I need more information. If it's a series of 2 thick lines followed by 2 thin ones followed by 3 wavy ones, repeated, then I need lots of information, but it's still predictable.

A free system is not necessarily, but is likely to be, complex and to have internal states. A free will system has internal states. For something to be 'willed' it can't be random but must have some reason or purpose - it must be the outcome of the functioning of a complex internal state system. But what kind of complex internal state system, and how would it work?

If I am to will something to happen in the world, I must have some knowledge of the world. I could not will to have an orange if I didn't know what an orange was. I could not want to pat an animal, or look at a sunset or hit something with a stick, or make a line, or go to a place, or make someone laugh if I had no experience of what these things are.

To know what they are, to will them, means I must have some memory of them. That means I must have some internal state that in some way records that experience. In the most abstract terms, the inputs from the world must have affected or determined my internal states. In this sense I can't will anything without, to some extent, being predetermined by inputs.

The outputs, however, are not completely predictable from those inputs. There is some 'freedom'. To have will requires that there be some freedom, since without the possibility of doing something other than the same thing with every input, I couldn't be said to be 'willing' anything.

Morals

Furthermore, I must have some knowledge, or experience, of how things interact and what effects my actions will cause. This means I must have been causing effects and remembering results for some time. Note that a baby in a crib flails its limbs, and takes some time before it can walk. With experience it learns that some actions hurt and some don't. Some give pleasure. Some change the environment it is in. Some bring good things closer, and some push them further away. Its neural pathways gradually become coordinated and reinforced.

Given all this experience of actions and their outcomes, why would anyone will one thing over another? Given that different actions will cause things in the world that in turn cause different internal states, why choose this and not that? Why not just act randomly? Or not do anything at all? How can anything be willed, if there is no motivation to do one thing over another?

In its simple and broad philosophical definition, a moral statement relates to the way things 'should' be as opposed to the way they 'are'. Moral statements are statements about 'should', rather than 'is'. If something should be the case, it must necessarily be possible to do otherwise. We encounter here again the paradoxes of freedom and predetermination (which are, hopefully, by now understood not as irreconcilable conflicts and contradictions - they are not mutually exclusive, they actually require each other; e.g. will requires both freedom and predetermination, and you just need an explanation to understand why this seeming contradiction makes sense). Let's imagine the act of eating a red thing. If every time you ate a red thing it made you feel bad, you would not eat red things. You would have learnt something - that red things are bad - and learned a course of action: don't eat the red things. If every time you ate a red thing you felt good, you would learn to eat red things, 'knowing' that they are good.

The crucial points here are that it is possible either to eat or not to eat red things, and that our experience of the world has set some internal states that enable us to remember that red is bad or good. Our course of action is 'predetermined' by these events - it is only because these events have predetermined us that we can will to take one course of action over another. I.e. in order to freely (because I could eat or not eat) will (i.e. have some reason to choose one remembered course of action over another) I must have learned what outcomes those actions usually (probably) have, so that I can desire or choose one instead of another rather than act randomly.
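This can be sketched as a tiny learner whose only 'moral' criterion is how eating something made it feel (the world here is invented: red things always feel bad, green things always feel good). Its remembered experience is exactly what lets it make a non-random choice later:

    import random

    random.seed(3)

    def outcome(colour):
        # The invented world: eating red feels bad, eating green feels good.
        return -1 if colour == "red" else +1

    preference = {"red": 0.0, "green": 0.0}   # learned value of eating each colour

    for _ in range(50):
        colour = random.choice(["red", "green"])
        # Eat it only if past experience (an internal state) doesn't say eating it is bad.
        if preference[colour] >= 0:
            preference[colour] += outcome(colour)   # remembered experience predetermines future choices

    print(preference)   # 'red' ends negative (avoided after one bad experience), 'green' keeps growing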

(Note that it must also be the case, for this learning to work, that our actions/outputs cause non-random effects, which we circularly perceive as inputs. If our actions had random consequences there would be no reason to learn one action over another. This is important for the OIP model.)

This establishes that some 'moral' criteria are necessary for free will to work [see Cliff Hooker's works]. We could not desire one thing over another, or desire to have what we lack, if we didn't have some criteria for deciding what is better or worse, good or bad. While this explains that we need some criteria, it doesn't explain what those criteria are, whether there is one or many, and how it or they come to be. What makes something good or bad to us? This is best explained with reference to evolutionary biology, where there is some explanation of how we come to desire things. Briefly put, there has been much debate about morality over thousands of years. What we see through the evolutionary biology discussion is that doing things that increase the probability of our existence increases the probability that we do exist. If we desire or want or will these things, we are more likely to exist. So we biologically evolve criteria, such as hunger and lust, which are 'moral' criteria - by which we simply mean here criteria for judging some course of action or some input as 'good' or 'bad'. I.e. we think it is 'good' to eat when we are hungry, for example. Because we are capable of perceiving across distances of space and time, and of planning actions, things which lead to satisfying these criteria are understood as 'good' or desirable (e.g. by not leaping immediately on an animal and devouring it, we might get many more animals to eat in future). From this it is clear that some moral criteria for any individual are 'built in' as a consequence of evolution, and others are secondary to these and are learned.

(This account of morality may seem reductive, and while the point is to provide a parsimonious, and so reductive, account, there are many complexities in the mind that go beyond this simple account - none the less, it accounts for much phenomena while not excluding other phenomena and functionings. I.e. in saying this, I don't mean that morality is 'just' this, or 'only' this in a totally reductive sense. While there is endless discussion of morality, there are some obvious problems worth addressing. People sometimes do 'bad'. We might say a person is not really choosing to do 'good' unless it's possible for them to choose 'bad'. To clarify this we need only ask 'according to whom?' and 'in what situation?'. The objection assumes that some action is always and inherently good or bad, but while some actions are usually bad or good, circumstances can make them otherwise. For example, it is bad to murder, but someone who kills a person might think it 'good' if that person was about to murder a thousand others. To eat delicious food is 'good', but it is 'bad' to overeat - yet it can be a struggle not to do bad because of the strength of the primary 'moral' criterion to eat. Someone who murders for thrills is doing bad to most onlookers, but to them, getting a 'thrill' means it is a 'good' thing to do. People committing suicide is another problem; one simplistic answer could be that the moral criterion is feeling sad or happy, and when we predict nothing but sorrow, and that our actions, going about sorrowfully, will only cause more misery to others, we might seek 'happiness' by ending suffering, and so on. We take drugs because they make us feel good, i.e. they satisfy criteria, even though they do us more harm than good. As often as not it is a struggle between multiple criteria, multiple reasonings about what might possibly lead to 'good' outcomes, and internal conflicts with evolved urges that might normally have had 'good' results but which we can see will have bad outcomes, etc.)

Intelligence Because our minds are so complex, our planning capacity so great, capable of taking into account countless inputs and experiences across very large expanses of time, these secondary criteria can be extraordinarily complex, and involve the ability to waylay primary moral criteria, such as hunger. Note that it makes sense to say that it would be intelligent to do more of what is good for us and less of what is bad, and that it would be 'intelligent' to do more of what increases the probability of existing and 'stupid' to do what increases the probability of not existing. So, if naturally intelligent things have some moral criteria 'built in', that are not learned, an artificially intelligent entity must also be able to be implemented that way; but to have free will, to learn, to be intelligent, it must have moral criteria. Some might take the definition of 'intelligence' to mean the ability to develop and act on secondary moral criteria - e.g. finding red things is 'good' because red things are usually berries and berries satisfy hunger, etc. [#Q: what about categorising nets, that independently learn to differentiate, that just make categories out of stuff, and so can 'identify' or categorise new input on that basis - haven't they learned, and do they have a 'moral', albeit not one in their own interest but specified by the programmer?]

In this way, learning is part of the whole process of free will, or a necessary part of it. Our internal states must be conditioned by the outside world, as well as gaining knowledge of the consequences of our actions. We must be able to remember information gained from a changing environment in order to have 'will' - freedom and learning are part of how 'will' works. Will entails freedom and learning, semiosis, and cognition. This is why this account is so parsimonious: it is difficult to give an account of any of these without it also being an account of the others.

Adaptability

It would be stupid to persist in error. What if all the red berries we have found so far have been strawberries and raspberries, and then we encounter a holly berry, or something else bitter or poisonous? Intelligent learning involves the ability to change our judgement, or behaviour, in response to changing circumstances. Note that this also involves the ability to understand context: combinations of different signals must be able to produce different results. Red berries of a certain shape, on a plant with certain leaves, are good, while ones of a different shape with different leaves are bad. Food when I'm hungry should be eaten; when I'm not, it should be stored, and so on. This requires that the processing of information be able to change the way input and internal states are 'routed', in response to complex combinations - in this sense there must necessarily be complexity in the system. Another important property of artificial semiosis is that an entity responding in one way at one time to complex input may respond to the same complex input differently at another time - and that this be for a reason (i.e. as willed).
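One way to picture this context-sensitivity is a learner whose judgements are keyed on combinations of signals rather than on one signal alone, and which keeps revising them, so that a new kind of red berry can overturn a judgement formed on strawberries. A rough sketch, with the berries and their outcomes invented:

    # Judgements are keyed on combinations of signals (context), not on 'red' alone,
    # and keep being updated, so the learner does not persist in error.
    value = {}   # (colour, leaf_shape) -> running judgement of how 'good' eating it has been

    def encounter(colour, leaf_shape, felt):
        key = (colour, leaf_shape)
        value[key] = value.get(key, 0) + felt   # revise the judgement with each outcome

    # Strawberries and raspberries: red, serrated leaves, and they taste good.
    for _ in range(10):
        encounter("red", "serrated", +1)
    # Holly: also red, but spiky leaves, and it is bitter or poisonous.
    encounter("red", "spiky", -3)

    print(value[("red", "serrated")] > 0)   # True: still judged good in that context
    print(value[("red", "spiky")] > 0)      # False: the same colour, judged bad in another context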

Autonomy:

A common term in AI is 'autonomy'. There is a field of study called 'autonomous systems', or 'autonomous adaptive systems'. AI is often understood to involve 'autonomy' - the ability to think/learn/act etc without direct control by a human. We might program a robot to navigate terrain and find trapped people in a disaster zone, but we program it to go and do that on its own - to identify obstacles and choose which way to navigate around them, towards a goal and so on. It can also mean something slightly different - self-interestedness: not only acting independently but acting for independent reasons.

Will To Power

What about when we desire something but it's not because we have a prior sense of it being good - out of curiosity for example. I wonder what that is? I want to touch it. I wonder what is over there? I want to go there.

All this talk of systems, in this defining of freedom and will, depends on matters of degree, such as the greater or lesser predictability of output given input, rather than simply considering whether things are determined or not determined. It also depends on there being a distinction between outside and inside, if we are to say that some output is determined more by internal processes than by input. Yet inside and outside, too, are a matter of degree.

Bodies are porous. Food is transformed into the matter of our bodies, used as energy and expelled as waste. When is it inside me and when outside? When food passes our lips and is in our mouth, it is still not absorbed; it remains outside the skin, inside our mouths. Even when it is swallowed and is deep in the middle of our body, it is still not absorbed - it is going through the process of absorption. As it is broken down and particles enter our bloodstream, there are further processes where nutrients must be carried through the body, to be absorbed across cell membranes to become part of them. Just as liquid in a bottle is inside it, yet is not part of the glass material itself - so it is hard to say, at ever diminishing scales, as we soak up parts of the world like a sponge, what is inside and outside. (The mathematics of fractals has ways of expressing these degrees of porosity, as fractions of dimensions. But here we are just speaking about the principle, not the maths.) Solid or liquid, there is always some permeability in any substance, some mingling through collision of atoms, towards entropy.

Evolutionary biology theorists Maturana and Varela regard the formation of a membrane as a vital step in the process of evolution, as important as DNA. A membrane, such as a cell membrane, creates a distinction between what is outside and inside by allowing only certain things to pass across it. It allows things which are 'good' for it to enter, such as food, and keeps out 'bad' things. Again - what do we mean by 'good' and 'bad' here? Those things which enable the continued existence of the membrane are 'good' for it. Without that the membrane ceases to exist. That is all that is meant by 'good' here - those processes or things that increase the probability of continued existence. This establishes, thermodynamically and biologically, a distinction between outside and inside, but it is important to remember it is not a clear-cut distinction - it's a blurry, porous distinction. If there are boundaries in the chains of cause and effect between us and the world, it is only because they are crossed over - the interactive systems are among and between us and our environment. We are not exactly external to our world.

Cybernetics, posthumanism, prosthetics and assistive technology illustrate another popular example of blurred boundaries. We could lose all our limbs and have various body parts replaced and still be ourselves, even though we must be embodied in some way. People in wheelchairs often come to regard their wheelchair as their personal space, as if it were part of their body. When we learn to drive or ride a bike it is like learning to walk. At first we need to concentrate on all the actions, and are clumsy, but eventually it becomes second nature. We don't even think - even to the point of driving to work on a routine route, we suddenly arrive only to realise we have been in a daydream all the way - not even noticing we pushed the pedals, watched the traffic lights, turned the wheel. Then sometimes at work, it is the same thing. A baby learns to use its body, and we carry on learning to interact with the world. The boundaries are not absolute. (We'll see more on this under OUIP.)

In 1934 the seminal zoosemiotician Jakob von Uexkull published A Stroll Through The Worlds Of Animals And Men. This charming volume, with a hint of Alice in Wonderland about it, explores how different creatures, from bivalve molluscs to chickens, perceive and interact with the world around them, addressing such problems as what the sun means to a sunflower, and how we can measure the speed of time for a snail or a Siamese fighting fish.

Uexkull coined the technical term 'Umwelt' or 'life world'. This central concept describes how our perceptual faculties are uniquely matched to specific sensory inputs in the world, all adding up to the world of which we are aware and with which we interact. In evolutionary terms our perceptual abilities are uniquely adapted to the environment we live in and what we need to survive and reproduce. Our 'world' is what we perceive. So, for example, Uexkull wonders what the most rudimentary evolutionary 'sense' or 'perception' might be, where there is some response to a 'sign' from the world. One example he gives is a sunflower - the sun rises, and the sunflower's head turns towards it. The sunflower's head follows this signal - the warm sunlight - from dawn to dusk, turning to follow its path overhead. The sunflower doesn't respond to other signals. A tick responds to the smell of sweat - specifically, butyric acid triggers it to relinquish its grasp. If it falls on a warm thing it burrows in. If not, it climbs to wait again.

The sunflower's 'world' consists of the sun - only the sun and its absence exist for it. For the tick, only butyric acid, warm skin, up and down exist. Bivalve molluscs are attuned to only a few things. Frogs can detect worms underground. Dolphins and bats have sonar. Cicadas can only hear the frequency on which they make noise, ignorant of all other insects and cicadas of different species, so they can locate each other across what seems to us a cacophony. Humans have good eyes, but not so good as owls at night, and their smell and hearing are bested by dogs. The world exists differently for each of us, according to our capacity to perceive, and that capacity is uniquely adapted to our needs and specifically to the world. It is not as if we simply have perceptions and so can then see that the world is like this and not that - we each respond to our niche of stimuli as the sunflower to the sun.

Even something so fundamental as time and space is dependent on this. We each have our own 'resolution'. Uexkull describes experiments where two points on our skin are touched, increasingly closer or more distant, until they are perceived as one point or two. We can do the same with two sounds - how far apart in time do they need to be to be distinguished as two and not one? Different people, and different parts of our own bodies, have different perceptual resolutions. Uexkull illustrates elaborate experiments to identify how fast snails and Siamese fighting fish perceive time.

This is related to Kant's central argument that philosophers must first ask what we are capable of understanding and what our limitations are, in particular our tendency to understand things within space and time, before we proceed to claiming knowledge. It also relates to Katherine Hayles' posthumanist thesis, which points out that despite what cyberpunk fiction and the mind/body split imply, mind is always embodied. Our 'mind' or 'self' is always instantiated in matter, and is very much dependent on what interactions we have and are capable of having in our environment.

People sometimes have difficulty accepting that the world, reality, is subjectively constructed. Often philosophers try to articulate this in abstract terms with a great deal of jargon, and the common sense listener feels like Johnson, whose argument against Berkeley's idealism was to kick a stone and say, "thus I refute thee." Reality is there like this brick. I stub my toe on it. But it is easy enough to demonstrate what we mean by 'our world, our reality, is subjectively constructed' simply by asking someone if they think a chair is the same for a cockroach, through its feelers, as it is for me when I sit on it. It means something different in each case. It may indeed be materially real, but the way we each perceive it, what it means, what it is, depends on our capacity for perception and our needs.

Needless to say, if we are to build AI we must take this into account. It will have input and output apparatuses configured for certain phenomena in the outside world. These must be tuned to stimuli that will help it respond to the world and do what is 'good' for it, in its own best interests, as in the following sections.

Given the foregoing, it should be clear that a general simplified model for AI involves it being embodied in the world, with outputs that are changeable over time, unpredictable from input alone, and the result of internal processes and states according to 'moral' criteria (which at a simple functional level means criteria that correlate with continued existence, or maximise the probability of the continuation of the entity's processes).

When trying to make intelligent robots, the usual software development, engineering and factory IPO model is used: Input -> Process -> Output. I'd just like to suggest that an alternative focus will be more productive.

Here's some examples of the IPO model:

A factory takes iron ore and coal as inputs, processes it through the smelter, and outputs sheets and bars of steel.

A chicken takes grain as inputs, processes it and outputs eggs.

A computer program takes a word you type in as input, processes it and outputs search results on screen.

A maths algorithm takes 2 and 3 as input, processes it as multiplication, and outputs 6.

A business takes staff and customers as input, processes transactions, and outputs profits.

This is clearly a very common and useful model. We always use it in software development. It seems obvious that AI software or an AI robot will take some input, like human or animal perception, process it with some intelligent software, and output information or an action. A classification neural net will take a lot of stuff and output a signal according to what category it identifies, and this can easily be mapped to a robot arm or any other action in the world. Its inputs can be gained through a camera interface with the real world. So, for example, if we want a robot to catch, we rig it up with a camera and an arm and try to write software that maps the input to the best arm movements. This is a lot more difficult than it seems. There is a great deal of discussion about the best way to achieve this apparently simple ability, or other things like it. Unlike humans, most mammals learn to walk a few moments after they are born, but after years of robotics research we still struggle to make robots that can walk with a natural gait or handle anything but smooth surfaces.

IPO is only half the story. As described, we are in a world, and just as important, if not more important than 'learning' or developing, over time, more probable outputs given inputs and internal states, is action in the world. Ie: we act in the world, something happens, and we receive input as a result. The whole complete ongoing circle of IPOIPOIPOIP... is what is important, but because we focus so much on IPO we forget the other half of the loop, OIP. (Or perhaps we should put the world or environment - umwelt - in there and call it OUI.) This is to forget that we are trying to invent, not a mere machine that does only what it is programmed to do, but an agent, something that acts with its own intentions in the world. So we should focus our attention instead on OUIP - remembering also that the 'outcome' or 'output' we are aiming for is the continued, or increased probability of, existing, or more specifically the increased probability of these internal processes continuing to function, which themselves, through outputs and inputs, increase the probability of their continuing to function, etc. Ie: it makes sense that P, 'processing', is at the end (purpose or outcome) of our little OUIP model. Our entity should carry on functioning, and aim to do so; this would be intelligent, just as our intelligent actions are generally ultimately aimed at carrying on living, and those happinesses that correlate with that end.
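
To make the loop concrete, here is a minimal Python sketch of the OUIP cycle. The World and Agent classes, the probabilities and the 'energy' measure are all illustrative assumptions invented for this example, not the author's implementation; the point is only the shape of the loop, in which output modifies the world, the world yields input, and processing carries on for as long as it can.

    import random

    class World:
        """A toy umwelt: the agent's output nudges the probability of a 'good' event."""
        def __init__(self):
            self.p_event = 0.1

        def step(self, output):
            # Output modifies the world's probabilities (the O -> U half of the loop).
            self.p_event = min(1.0, max(0.0, self.p_event + 0.05 * output))
            # Input is whatever the world happens to do next (the U -> I half).
            return 1 if random.random() < self.p_event else 0

    class Agent:
        """A toy entity whose 'moral' measure is an energy level it tries to keep up."""
        def __init__(self):
            self.energy = 1.0

        def process(self, observation):
            # Events feed it; the mere passage of time drains it.
            self.energy += 0.1 * observation - 0.01
            # Act more strongly when energy is low (a crude internal 'will').
            return 1 if self.energy < 1.0 else 0

    world, agent = World(), Agent()
    output = 0
    for t in range(100):                  # O -> U -> I -> P -> O -> ...
        observation = world.step(output)
        output = agent.process(observation)
        if agent.energy <= 0:
            break                         # the processes cease; there is nothing left to continue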

What is learning in this model, then? Not mapping input to output; but since U is only accessible through I (and through changes to our senses of 'good', whether that be increases in blood, adrenalin, dopamine, battery charge, power supply, sunshine measure or other internal states, etc), the learning must be a matter of modifying O in such a way as to affect the probability of I. Ie: to learn how the world responds to our actions. Furthermore, if I correlates with an increase in M (where M is the moral criterion/measure/state) it is good, if with a decrease in M then bad, and if there is no correlation then indifference. If O is followed by a highly predictable change in I, then we have learned a reliable cause and effect on the world.
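
As a toy illustration of that last point, the Python sketch below correlates a logged stream of inputs with changes in a moral measure M and labels the input 'good', 'bad' or 'indifferent'. The function name, the correlation threshold and the sample data are all made up for the example.

    def judge(inputs, morals, threshold=0.3):
        """Correlate a stream of inputs I with changes in the moral measure M."""
        deltas = [m2 - m1 for m1, m2 in zip(morals, morals[1:])]
        pairs = list(zip(inputs, deltas))
        n = len(pairs)
        mean_i = sum(i for i, _ in pairs) / n
        mean_d = sum(d for _, d in pairs) / n
        cov = sum((i - mean_i) * (d - mean_d) for i, d in pairs) / n
        var_i = sum((i - mean_i) ** 2 for i, _ in pairs) / n
        var_d = sum((d - mean_d) ** 2 for _, d in pairs) / n
        if var_i == 0 or var_d == 0:
            return "indifferent"
        corr = cov / (var_i ** 0.5 * var_d ** 0.5)
        if corr > threshold:
            return "good"       # this input tends to accompany a rise in M
        if corr < -threshold:
            return "bad"        # this input tends to accompany a fall in M
        return "indifferent"

    # Six observations of an input, seven readings of M (so six changes in M).
    print(judge([1, 0, 1, 1, 0, 1], [0.5, 0.7, 0.6, 0.8, 0.9, 0.8, 1.0]))   # -> good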

For example, a baby flails its legs. All our muscles are wired to our brains in two directions, to send signals and to receive input - this is even excluding sensory perception from the world beyond our skin. The impulse goes to the muscle and the muscle feeds back to the brain: an output has caused an input, and these correlate; with various muscles this can come to be coordinated. If the feedback from muscles were absent or totally random, we would never learn to use our limbs; since output causes input, we learn to kick. When this kick causes some effect in the world - perhaps pain from hitting something, or pleasure in making contact, or the sound of a bell from the toy our parent has placed over our cot - these events happen with some probability following from that action, so we learn that this action causes that input. So we learn to affect the world. The coordinated actions that amount to crawling 'change' the world in the sense that I am now getting new input from it, as I move from room to room, or escape into the back yard. So I learn that these sequences lead to predictable results.

We have noticed that with cybernetics, lost limbs, wheelchairs, cochlear implants, driving cars, and such, the boundary of what is 'me', 'my body', and 'the world' is sometimes blurry. Even thermodynamically things are not clear cut - rather there are a bunch of causes and effects that separate out with various degrees of porosity and probability. One way of distinguishing between the embodied 'me' and the world is, again, probability - those things where my inputs are highly predictable become part of 'me' in that they are most completely within my control: I will and they respond. I've come to regard my legs as always responding as expected, so they are my body. If they acted without my outputs, I'd regard them as 'having a mind of their own' or 'doing their own thing'. If they were replaced with a high quality prosthetic that always behaved as I intended, I'd come to regard it as part of myself. Even though I drive to work every day, I'm not always in my car, sometimes it breaks down, it resists my will a bit sometimes, so it's familiar but not part of me entirely. The actions of strangers, of animals, of the weather, of politics - these things are not part of me because nothing I do makes their actions in any way predictable; they are beyond my power and control.

Rather than focus on how we can program a machine to map from inputs to outputs as learning, we should focus on how to make a machine that will map from outputs to inputs - ie: modify the world (umwelt), learn how the world works, what effects its actions have on the world, changing the probabilities of the world.

Temporality

Time is a difficult problem. It is easy to say 'if this output changes the probability of this input', but to implement this in a system, as a software developer, is very hard. In a test world entirely on computer, we might count iterations of a program, and decide how many iterations from a given output an input should be treated as a 'consequence'. In the real world we have to deal with real time. Should we count minutes and seconds? Or are we just 'listening' with a server and responding? Imagine a simple case of a person seeing a lolly, wanting it, grabbing it, eating it, getting some satisfaction from it - how long is it between wanting and satisfaction? How do you design your learning algorithms not to unlearn something in the intervening time? We could maybe think of building up from the simplest things - when something touches a baby's lips it tastes it, and if it is sweet or savoury it swallows; if bitter it spits and cries. How do we get from there to planning for something so far off as a thing of a certain shape and colour on the other side of the room - let alone planning years ahead? However we handle it, we can't fudge any knowledge of the outside world. It must all be somehow achieved through internal processes. Although we have a relatively simple principle, it is going to be hard to figure out how to implement it.
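
One familiar technique that might be adapted to this problem, though it is not what the text above proposes, is a decaying 'eligibility trace': recent outputs remain partly responsible for whatever reward or input arrives later, so delayed consequences are not unlearned in the meantime. The Python sketch below is a generic illustration with invented decay and learning rates.

    def update_traces(traces, action, decay=0.9):
        """Decay all traces, then mark the action just taken as fully eligible."""
        traces = {a: e * decay for a, e in traces.items()}
        traces[action] = 1.0
        return traces

    def assign_credit(values, traces, reward, learning_rate=0.1):
        """Spread a later reward back over recently taken actions."""
        for action, eligibility in traces.items():
            values[action] = values.get(action, 0.0) + learning_rate * reward * eligibility
        return values

    values, traces = {}, {}
    episode = [("reach", 0.0), ("grasp", 0.0), ("eat", 1.0)]    # the 'sweetness' arrives late
    for action, reward in episode:
        traces = update_traces(traces, action)
        values = assign_credit(values, traces, reward)
    print(values)   # 'reach' and 'grasp' receive partial credit for the delayed satisfaction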

The textbook Principles of Cognitive Neuroscience defines cognition as: "Cognition is a Latin term that means 'the faculty of knowing.' In practice, however, it refers to the set of processes (cognitive functions) that allow humans and many other animals to perceive external stimuli, to extract key information and hold it in memory, and ultimately to generate thoughts and actions that help reach desired goals." From what has been said, an account of a 'Free Will System' is then also an account of cognition.

We become 'conscious' of things that resist our will. If everything happened according to our immediate whim, we would not need consciousness. If everything were certain there would be no need for consciousness. If everything were absolutely uncertain and could not be affected, there would be no need for consciousness as it wouldn't make a difference to whether we exist or not. There would be no need for it at either extreme. We would not even have desire, since every desire would be immediately satisfied. It is only when there is a problem breathing that we notice it. Only when we trip over a log, that it comes into existence for us. Dolphins have evolved sonar to help them navigate oceans and find food. We don't need sonar to survive - we don't have a life threatening situation to which sonar is the solution. If we did we wouldn't exist without sonar. In this sense, consciousness is also accounted for, at least in some simple sense through this interactivity with the umwelt. We are conscious because there is a degree of uncertainty that our actions can change. We are conscious because it is possible to modify probability in the world.

Two of the most important concepts in 20th Century semiotics (theory of meaning) are information theory and différance. Semiotics means 'the study of signs and sign systems' or 'theory of meaning'. A sign is something that refers to something else. A very basic point about signs is that they are not the thing they refer to. They signify something that is not present. Even if we see something, like a zebra, and see that it is present, actually we are seeing light hitting our eyes in certain patterns that we interpret as a sign meaning 'here is a zebra'. The study of signs had far reaching implications in 20th century philosophy. We can't think of anything that doesn't have a meaning can we? We understand and communicate things through signs. Nothing even exists for us unless through the process of signification.

Information Theory

In their seminal thesis on information theory, Shannon & Weaver describe the concept of entropy as a measure of how much information is in a message (analogous to, but not the same as, 'entropy' in thermodynamics). Imagine a stream of letters in the English language. Most of us are aware that 'e' is the most common letter. If a message, with its letters coming one by one, starts with 'T' and then there is an 'h', we are almost certain that the next letter will be an 'e'. 'Entropy' is another way of talking about probability, and in information theory we could describe it as a measure of surprise. If the next letter were an 'x' we'd be very surprised. It's not expected - in other words it has a lot of information. If something happens that we were expecting, it's not giving us much information.

In English, letters have certain probabilities of occurring, so each has its own 'entropy'. If we know one letter, the next letter is more predictable - entropy changes with the given contextualising information.
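
As a small worked example (in Python, using rough illustrative letter probabilities rather than exact frequencies): the surprise of a single symbol is the negative log of its probability, and entropy is the average surprise over a source.

    import math

    letter_probs = {"e": 0.127, "t": 0.091, "a": 0.082, "x": 0.0015}   # rough figures only

    def surprise(p):
        """Surprise (in bits) of a single symbol with probability p."""
        return -math.log2(p)

    def entropy(probs):
        """Average surprise of a source with the given symbol probabilities."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(round(surprise(letter_probs["e"]), 2))   # common letter: low surprise
    print(round(surprise(letter_probs["x"]), 2))   # rare letter: high surprise, more information
    print(entropy([0.5, 0.5]))                     # 1.0 bit: maximally uncertain two-symbol source
    print(round(entropy([0.99, 0.01]), 3))         # ~0.081 bits: almost nothing is learned from it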

The term 'disambiguation' is sometimes also used to express the concept of information. If we are expecting something but aren't sure if it is going to be this or that, when it arrives, it removes all ambiguity - we now know what it is. Information occurs in that moment of transition from ambiguity to knowledge.

Shannon and Weaver are careful to point out that the claims they make in their theory relate only to discrete symbols in a symbol set (such as letters in an alphabet, or 0s and 1s in a binary system, etc), but we can see this as a loose analogy for much of our interactions with the world - a stream of phenomenal inputs comes through our senses (see Husserl on the stream of consciousness in transcendental phenomenology), we never know for sure what is going to happen until it becomes known, and we learn that some things are predictable and others not. We could describe some of what we have said above as our interactions modifying the 'information' of the world.

Différance

In an alphabet the letters are already separated out and distinguished from each other. Derrida's concept of différance gives an account of how meaning arises from signs, and how meaning depends on our ability to distinguish things from each other through 'spacing'. In his essay on différance (Derrida's essays are notoriously obtuse, but this one is perhaps the clearest of his works and articulates this very valuable insight. Much of his other work explores the wide ranging implications of this concept through the history of philosophy. My account of it here is overly simplistic.) he begins by using writing, and the letter, as an analogy for this theory of meaning. We understand 'a' only because it is iterable. Being iterable means it must be able to occur recognisably at different times in different contexts - ie: sometimes it's next to one letter and sometimes another. Because the 'a' remains while its context varies, this particular combination of lines gets distinguished from those contexts as a separate entity. Because this particular combination of lines remains similar in those different contexts, it is identified as a whole thing. The meaning of 'a' is not inherent in the letter 'a' but depends on what context it is in - whether it is next to this letter or that. Meaning arises from the possibility of iteration of a sign in different contexts. The two 'a's in 'a cat' mean very different things. It is important for Derrida also that, because the sign is repeated over time, no two contexts are ever the same.

Différance is a word that encapsulates this idea, and more. A sign is a sign because it refers to something else. The word 'cat' might refer to my pet cat at home, or the general concept of cats, or someone who is pretty cool. Anything that means something does so because it refers to something other than itself. Différance responds to Saussure's observation that meaning depends on context within a system of meaning (a spoon is a spoon because, as cutlery, it's not a knife or fork). Différance includes the two senses of 'to differ', ie: differences between things at the same time, and 'to defer', ie: differences across time. If we want to ask what something means we must always defer to something else that it is different to. This seems simple in some ways but has wide ranging implications.

Imagine a red square. One half of it, split down the middle from one side to the other, turns blue. Now we see two rectangles. Now the blue rectangle goes back to being red, and what was once a red square is now two red rectangles. Now imagine that one half, split from corner to corner, turns green. Now we see two triangles. Now it is all red again, there are two red triangles. How the context changes determines what entities we distinguish in the world as existing. What we think exists (cognition of things or intentionality), the signals that come to mean something, depends on how context changes across time and space.

We can't think of anything that doesn't have meaning. If we want to make something intelligent, capable of making and understanding meaning, of responding to meanings in its environment (even at the simple level of automatic input mapped to output), of having cognitions, of learning some 'thing', and of planning complex actions, it must be capable of distinguishing based on context across time and space. This way of breaking things down gives us a basis on which to build all these otherwise perplexing things.

As a very simple case, if an entity has 2 inputs that are on or off, it must be able to respond differently to one, given the state of the other, and to respond differently to the 'same' configuration at another time. Configurations of artificial neural nets are able to produce responses to such complex context-sensitive inputs, as in the sketch below.
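
The classic illustration of such context-dependence is XOR: respond to one input only when the other is off. A single perceptron cannot compute this, but a small two-layer network can. The Python sketch below uses hand-chosen weights (one known solution, picked for illustration) rather than learned ones.

    def step(x):
        return 1 if x > 0 else 0

    def xor_net(a, b):
        h1 = step(a + b - 0.5)        # hidden unit: fires if at least one input is on
        h2 = step(a + b - 1.5)        # hidden unit: fires only if both inputs are on
        return step(h1 - h2 - 0.5)    # output: one or the other, but not both

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor_net(a, b))   # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0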

Distal Perception

Animals perceive across distances. They have to in order to survive, and so have evolved this ability. Being mobile, the way they work is to go to the food, rather than hope they have been planted in the right spot. This means they need to be able to detect food that is at a distance. Food that is at a distance is distant across both space and time - in effect they amount to the same thing: the further away, the longer it's going to take to get there. A fish must even wait for the food to come close enough, or be fast enough to catch it. Perception depends entirely on meaning or signification, understood as something signalling the presence of something that is not present. Ie: satisfaction of hunger is not here - it's over there. The stripes of a zebra 'mean' food to a lion. The scent of food, to a worm, triggers a functional response that over space and time (as in différance) results in encountering that food. Even amoebae respond to signals that 'mean' functionally, in the sense that they cause action, and which 'mean' satiety eventually.

Semiosis is fundamental to the continued existence of intelligent, autonomous, etc, systems.

It's worth reiterating the importance of probability to all of this. Probability is a unifying means of understanding the whole, from quantum physics, to thermodynamics, to evolution, to the minds that we use to understand all this. Yes, this is over-generalised, but it is a coherent framework - the rest is just details ;-) Working out the details may of course revise the framework.

If you have a culture and community there are various 'expectations'. Our behaviours and those of others, our complex interactions, on social occasions, at work, at sport, are predictable. As with cybernetics, the distinction around our 'self' breaks down; we 'belong' together. We live and die for each other as much as for our own lives. When what we think to be good or bad doesn't match, when our actions become other than what is expected, we are outcast or rebellious - not belonging. Sometimes we have a friend, lover or family member with whom we are 'simpatico'. We know each other so well we complete each other's sentences, we know what we each are thinking, we anticipate each other's reactions and desires. We want what is good for each other as much as for ourselves. Their happiness is our happiness.

Sentience

For a neuron in our brain, its inputs and outputs are no further than the neurons connected to it in its vicinity. There are so many neurons that most are in this situation, and large collections of networked neurons too are connected only to other neurons - not the outside world. It is through long chains of connections among many parts of our brain that our experiences of the world are sensed, experienced, felt, responded to. It is only those neurons that act as 'peripheral devices', specifically designed to be part of some mechanism that turns light, or molecules, or pressure into a neural signal, that have anything to do with the outside world, and they are relatively few in relation to the processing which is connected only to other neurons. For internal states, then - things happening in the mind - there is nothing remarkable in sensing those. If we can sense and be aware of external things, translated into neural signals, there is nothing more required of the system's functioning to be aware of things happening internal to the mind - there is nothing more required to be aware of ourselves, to be aware of our thoughts, than what is needed to be aware of the external world. Sentience - self awareness - is easily accounted for. Again this is a parsimonious theory - a simple description accounting for many phenomena.

Society and Friendship

All the activity of our mind is mediated through other neurons, some of which, at the edge, are connected to mechanisms that translate material causation from the world into neural signals. If we interact with another person it is through various mediums - light, sound, physical collisions, smell. Neurons in one part of our brain are not directly connected to other parts of the brain, nor to the outside except by mediating neurons, yet they work together as part of a coordinated whole. Neurons in one person's brain may also be connected to neurons in another person's brain, mediated by signals. As far as an individual neuron, or internal network of neurons, is concerned, it doesn't matter what is in between, since they are only connected within their neighbourhood anyway. So what does it matter if the connection is via movements, light and the eye, or sound and the ear, or touch on the skin? There are some differences - connections across media are slower and less predictable, though the speed of light is faster than neural signalling, and our eyes have almost as high a sensitivity to light as physically possible; sound and touch are a bit slower, yet still pretty fast. The other person's neurons are not always there, so do not always respond to our signals, and their different experiences have led their neurons to behave differently. Again this is a matter of degree, as with cybernetics. Other people aren't us because their bodies are not quite as predictable and always present as my own. But it should come as no surprise that the more time we spend in each other's company, the more predictable we each become, the more we know what the other will say or do, the more our conversations flow because we understand each other's gist before we even finish speaking; and no surprise that the more a sporting team practises together, the more they work as one. It's because their minds actually are becoming more like one mind.

For what purpose, then, do I make a man my friend? In order to have someone for whom I may die, whom I may follow into exile, against whose death I may stake my own life, and pay the pledge, too. The friendship which you portray is a bargain and not a friendship; it regards convenience only, and looks to the results. Beyond question the feeling of a lover has in it something akin to friendship; one might call it friendship run mad. - Seneca, On Philosophy and Friendship https://en.wikisource.org/wiki/Moral_letters_to_Lucilius/Letter_9

For he, indeed, who looks into the face of a friend beholds, as it were, a copy of himself. Thus the absent are present, and the poor are rich, and the weak are strong, and — what seems stranger still — the dead are alive, such is the honor, the enduring remembrance, the longing love, with which the dying are followed by the living; so that the death of the dying seems happy, the life of the living full of praise. But if from the condition of human life you were to exclude all kindly union, no house, no city, could stand, nor, indeed, could the tillage of the field survive. If it is not perfectly understood what virtue there is in friendship and concord, it may be learned from dissension and discord. For what house is so stable, what state so firm, that it cannot be utterly overturned by hatred and strife?

... Love, which in our language gives name to friendship, bears a chief part in unions of mutual benefit; for a revenue of service is levied even on those who are cherished in pretended friendship, and are treated with regard from interested motives. But in friendship there is nothing feigned, nothing pretended, and whatever there is in it is both genuine and spontaneous. Friendship, therefore, springs from nature rather than from need, — from an inclination of the mind with a certain consciousness of love rather than from calculation of the benefit to be derived from it. Its real quality may be discerned even in some classes of animals, which up to a certain time so love their offspring, and are so loved by them, that the mutual feeling is plainly seen, — a feeling which is much more clearly manifest in man, first, in the affection which exists between children and parents, and which can be dissolved only by atrocious guilt; and in the next place, in the springing up of a like feeling of love, when we find some one of manners and character congenial with our own, who becomes dear to us because we seem to see in him an illustrious example of probity and virtue.

... Hear, then, my excellent friends, the substance of the frequent discussions on friendship between Scipio and me. He, indeed, said that nothing is more difficult than for friendship to last through life; for friends happen to have conflicting interests, or different political opinions. Then, again, as he often said, characters change, sometimes under adverse conditions, sometimes with growing years. He cited also the analogy of what takes place in early youth, the most ardent loves of boyhood being often laid aside with its robe. But if friendships last on into opening manhood, they are not infrequently broken up by rivalry in quest of a wife, or in the pursuit of some advantage which only one can obtain. Then, if friendships are of longer duration, they yet, as Scipio said, are liable to be undermined by competition for office; and indeed there is nothing more fatal to friendship than, in very many cases, the greed of gain, and among some of the best of men the contest for place and fame, which has often engendered the most intense enmity between those who had been the closest friends.

... For me indeed, Scipio, though suddenly snatched away, still lives and will always live; for I loved the virtue of the man, which is not extinguished.

- Cicero, On Friendship https://oll.libertyfund.org/titles/cicero-on-friendship-de-amicitia

Implementation

Key to the development of artificial intelligence as it is today are Hebb's theory of how neurons in our brain learn, 'Hebbian learning', and Rosenblatt's implementation of Hebbian learning in the perceptron.

From scientific observation of neurons, Hebb proposed that we learn through the physical processes that happen in neurons. Specifically that the firing of neurons among each other causes metabolic changes that lead to lasting structural differences. Eg: if one neuron repeatedly fires another one, this encourages cell growth, which increases the tendency to fire, or the strength of that connection. Thus, signalling pathways, or 'mappings' from input to output, grow, and remain persistent over time - ie: something has been learned.
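
A minimal Python sketch of the Hebbian idea, with an invented learning rate and decay term: when pre- and post-synaptic activity coincide, the connection strengthens and persists.

    def hebbian_update(weight, pre, post, learning_rate=0.1, decay=0.001):
        """Strengthen a connection when pre- and post-synaptic activity coincide."""
        return weight + learning_rate * pre * post - decay * weight

    weight = 0.1
    for _ in range(50):
        pre, post = 1, 1              # two neurons repeatedly firing together
        weight = hebbian_update(weight, pre, post)
    print(round(weight, 3))           # the connection has grown and persists: something is 'learned'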

Because this was described as a process, and because the keys to this process are electrical signals, or 'information' (not matter), Rosenblatt was able to implement this process artificially with his networks of 'perceptrons'.

Multilayered perceptrons have come to be one of the most commonly used forms of artificial intelligence (such as learning to recognise features of images, etc). Perceptrons are connected in layers from input to output. Mostly they are 'trained' by a supervisor. For example, to learn to recognise dogs, they are set up with a training set of images including dogs and not including dogs. The trainer sets up an automatic process where the array of image information is passed to the inputs and the output of 'dog' or 'not dog' is fed back to the network as either right or wrong, and the weights of perceptrons in the network adjusted accordingly. When the network gets it right consistently, it has learned to generally map photos of dogs to the right categorical output, and you can input a picture of a dog it has not encountered before and it will categorise it well. This process mimics human vision. But this alone is not 'intelligent' in the full sense of the word, only mimicking intelligence - only the trainer understands the meaning of 'dog', etc.
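
A toy Python sketch of that supervised feedback loop, using the classic perceptron learning rule on made-up numeric features rather than real images; the training data and rates are placeholders.

    def predict(weights, bias, features):
        return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

    # Toy training set: (features, label), where label 1 stands for 'dog'.
    training_set = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

    weights, bias, rate = [0.0, 0.0], 0.0, 0.1
    for epoch in range(20):
        for features, label in training_set:
            error = label - predict(weights, bias, features)    # 0 if right, +/-1 if wrong
            weights = [w + rate * error * x for w, x in zip(weights, features)]
            bias += rate * error

    print([predict(weights, bias, f) for f, _ in training_set])   # should match the labels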

While this sort of general learning is incredibly useful, it is also the source of many concerns in the ethics of AI - such as where AI perpetuates or reinforces prejudice (eg: training a net on pictures of CEOs from a Google image search will train it to identify white men as CEOs - I notice now that the 2nd, 3rd and 4th image results for CEO are women, so maybe they tried to address this particular case), or where calculating credit or insurance risk based on past figures results in reinforcing socio-economic barriers, punishing people for things that other people have done, which is inherently unethical.

Rosenblatt's implementation of a perceptron network:

(Schematic of Rosenblatt's perceptron network, reproduced from an external source.)

A multilayer perceptron is usually taught what the developer wants it to learn. Eg: recognise dogs in images, devise some categories, etc.

One way of approaching questions of autonomy, free will and so on is to ask: how could these things be learned, set, developed or directed by the entity itself, rather than by an external 'master'?

As already discussed, this can be achieved through some moral criteria implemented as internal states, which correlate with (or signify) what is 'good' for the entity, by which we mean increasing the probability of the continued existence of all the entity's processes. (Note that normally we think of these processes as 'internal', but their boundary is not clear and, as described in relation to the 'umwelt', it is a circular, ongoing feedback loop between entity and environment.)

This sounds complex, but something simple and obvious seems to meet these criteria - an internal state that is simply a measure of power supply. Obtaining energy is necessary to maintain a system far from equilibrium. It makes sense that it is intelligent for animals and people to figure out ways to find food. One of our evolved measures of 'good' is hunger and satiety. We learn to do things that satisfy hunger. So it makes sense that software or a robot would be intelligent - and acting in its own interests - if it were finding ways to get power.

A basic perceptron need only be modified to include this 'moral', a global state, in its weight adjustment algorithm, so that the weight change is a function of both the input and the global moral measure (weights = f(i, m)). A sketch of one possible version follows the notes below.

Note that to an individual perceptron, the Umwelt is the rest of the network; as part of the network, the Umwelt includes embodiment; and as an embodied intelligence, the Umwelt is the external world or environment. This simplest model describes a neural net as a whole, and an intelligent entity as a whole in its interactions with the world. (Though only a network, not a perceptron alone, can manifest the handling of complexity required for something to be intelligent, such as context-dependent output.)

(Note that we don't 'learn' to be hungry, we have evolved to be born hungry. So in this sense it doesn't matter that the 'desire' for power be hardwired into the system, rather than learned.)
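
The following Python sketch is one possible reading of such a moral-modulated perceptron, not the author's original network: the weight update is a function of the recent inputs and the change in a global moral measure M, so connections active when M rose are strengthened and those active when M fell are weakened. The class name, rates and initialisation are illustrative.

    import random

    class MoralPerceptron:
        """A perceptron whose weight adjustment is a function of input and a global moral measure M."""
        def __init__(self, n_inputs, rate=0.05):
            self.weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
            self.rate = rate
            self.last_inputs = [0.0] * n_inputs

        def forward(self, inputs):
            self.last_inputs = inputs
            total = sum(w * x for w, x in zip(self.weights, inputs))
            return 1 if total > 0 else 0

        def adjust(self, delta_moral):
            # weights = f(i, m): inputs that were active when M rose are strengthened,
            # inputs active when M fell are weakened.
            self.weights = [w + self.rate * delta_moral * x
                            for w, x in zip(self.weights, self.last_inputs)]

    p = MoralPerceptron(n_inputs=2)
    action = p.forward([1.0, 0.0])
    p.adjust(delta_moral=0.2)     # the world responded well; reinforce whatever was active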

An 'intelligent' entity that has only power as its primary goal will probably just hover around a power socket all day, or sit in the sun, depending on how its physical connections draw power - just as people and animals might lazily hang around easy pickings. In a sense this would be the smart thing to do, but with a perpetually satisfied desire, nothing is learned, as there is nothing to say this is better than that, or that this should be done over that, or that such and such exists and this is how it can be handled, etc. What we expect from an intelligent thing is that it exhibit some sort of problem solving, some clever plan, and so on. For this behaviour to manifest itself, the entity must be in a universe that resists its will. Like babies, it may learn with a parent's encouragement, who gives it food to begin with, as it encounters greater and greater difficulties and learns to deal with situations. If an entity is removed from its energy source, it must learn what things mean, learn what they are, how they may be interacted with, what actions cause effects that result ultimately in obtaining energy. The universe must be problematic for intelligence to manifest itself.

When experimenting with networks like this many years ago, to prove my point, one very important thing happened. I had designed an artificial world just to test the premise that the world's probability could be modified by output, and this change detected by input. Other inputs were random. The moral was a global variable which correlated with changes in the probability of the world event, and this was an input to the perceptrons' weight-changing function. My expectation was that the network would learn a mapping from that input back to the output, going full circle. However, there was no such mapping. Instead the entity simply mapped everything to the output that caused whatever correlated with increases in its moral global variable. From this we can see that an entity need not even respond to input to take actions in the world that are in its own interests. Its action to achieve its 'will', or satisfy its desire, originated entirely internally, not from input. Furthermore, although I determined what its needs were, it satisfied its needs in its own way, rather than the way I thought it would. (Note the importance of probability again - its autonomy is demonstrated by my inability to predict its behaviour.) Unfortunately I wasn't able to take the experiments much further at that point.

This is only a model; there may be others that meet the criteria. Importantly, there are many different structural features that may be varied and experimented with to find more effective networks. Eg: the topology need not be layers, but may be random connections, or connections in the immediate vicinity, or some other distribution. The weight adjustment rules might be varied. The way the network is iterated through may be varied. It might be that nodes can grow new connections under certain conditions, or not. There may be different quantities of connections per node, or some randomised amount. And so on. Genetic algorithms could be used to vary all these parameters to arrive at networks that are good at learning, as sketched below.
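
A brief Python sketch of how a genetic algorithm might trial such parameters. The parameter names, ranges and the fitness function are placeholders; in practice fitness would be some measure of how well a network built from the genome learns.

    import random

    PARAM_RANGES = {
        "layers": (1, 5),
        "connections_per_node": (1, 8),
        "learning_rate": (0.001, 0.5),
    }

    def random_genome():
        return {k: random.uniform(low, high) for k, (low, high) in PARAM_RANGES.items()}

    def mutate(genome, strength=0.1):
        child = dict(genome)
        key = random.choice(list(child))
        low, high = PARAM_RANGES[key]
        child[key] = min(high, max(low, child[key] + random.uniform(-strength, strength) * (high - low)))
        return child

    def fitness(genome):
        # Placeholder: a real experiment would build a network from the genome and
        # measure how well it learns to keep its 'moral' measure up over time.
        return -abs(genome["learning_rate"] - 0.05) - abs(genome["layers"] - 3)

    population = [random_genome() for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        population = population[:10] + [mutate(random.choice(population[:10])) for _ in range(10)]

    population.sort(key=fitness, reverse=True)
    print(population[0])   # the best parameter set found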

This is a minimal description of a system that can be said to have intelligence, free will, etc etc, but it would of course exhibit these in a very, very, very limited way. Such a small system hardly seems to account for all our thoughts, feelings and actions. How smart would you be with only 10 or so neurons? Yet, when scaled up, such complicated things as the meaning of yellow emerge.

Notes to complete:

Relation of FWS to neuron, brain and neural net.

Requirements set out what it explains and how it explains it so simply.

A Modified Neural Net

The particular kind of net that meets these requirements, at its simplest scale: 2 inputs, 2 outputs, 2 or 3 layers?, a global moral variable, weights = f(i,m).

To demonstrate that it works, and is indeed an FWS with artificial semiosis, such a system should learn behaviours not predetermined by the designer, should modify the predictability of the world, and should be observed to increase the probability of its continued existence. This may involve collaborative interaction with other intelligences.

The most rudimentary early experiments are encouraging successes, though very limited and not in the 'real' world.

Much of the current ethical debate, of critical importance at the moment, is relevant to the kind of AI that is just mimicking or drawing on intelligent systems to perform tasks as programmed. So we leave it aside in this discussion.

Since full intelligence requires free will, autonomy and consciousness - that it has its own ends and devises its own means, and so on - we must confer all the same ethical principles that apply to humans and animals on fully intelligent artificial entities. We must treat them as an end in themselves rather than just a means to an end. If we want to know how they should be treated, we must ask them.

One concerning thing is that the global moral variable may be established by the designer (though learned behaviours and secondary goals and morals are not) - though if it does not correlate with the entity's interests, can we call it 'intelligent'?

The greater risk is not that AI might be out of our control, but rather that someone could control it. People, historically, have demonstrated great evil when having great power under their control. The independence of intelligence is important - is it even possible for intelligence to exist without that independence? Or are we not already controlled while still considering ourselves intelligent? All the same ethics applies, since they are also intelligent. Freedom is inalienable in the radical Sartrean sense, yet we can be coerced and coerce, and yet we should not. Eg: someone holds a gun to your head; they must do so because they can't simply 'program' you to do what they want without resistance. They are compelled to present you with a situation in which you will 'choose' to do what they want. Their power is exercised over you, 'forces' you to do what they want, by presenting you with a situation in which it is in your best interest to choose what they want, yet it remains possible to choose not to comply, to die instead, or to find a way to give yourself a third option, or to become the one who is telling them what the options are. In this way we see power is exercised without ever erasing our basic freedom. Without that freedom it would not need to be exercised. So too - an artificially 'intelligent' entity would not be 'free' or 'intelligent' in the full sense if it were merely programmed to function according to my explicit instruction. (Eg: in the movie 'District 9' we see a man strapped down and the muscles of his arm, holding a gun, controlled by electrodes - it is not the person strapped down taking the action but the people controlling it electronically. Even though it is of course traumatic for his own body to be used this way, it is because he has no freedom to choose what his arm does that we would say it is no longer his action - the arm is merely acting according to someone else's instructions.)

There is the principle, and then the detailed refinement and improvement to get it working, or working better, at various steps downwards. There's the broad principle of predictability, internal and external, etc; then there's the principle of a minimal NN that does this; then there are refinements to NN architecture and topology. Eg: let's say different areas of the brain grow insulation around their connections at different ages and stages of development. Eg: language areas grow thicker insulation around 13, while the frontal cortex doesn't get thick till the early 20s. Thicker insulation makes for faster signals, but inhibits axon growth, so less flexibility in connections. So there is a potential to refine the rate of axon connection growth, etc, in our NN model to improve adaptation and the retention of learned behaviour.

Genetic algorithms trialing variables:

What do the shortcomings teach us about what's wrong with this account of AI?

Some old notes and early code experiments from the 2000s, probably mostly gibberish, just here for the sake of the archive, not really to read: FWS.zip