Can You Know The Future? Scientific Evidence Confirms You Can!

Logical people and the scientific community have been scoffing at psychics, astrologers, tarot readers, palm readers, and all other kinds of so-called future seers for quite some time now. As far as we know, time flows in only one direction.

It would be awesome, however, if we were able to know the future before it happened. Imagine the possibilities and implications. Winning the lottery and investing in profitable shares would be top of the list. And what if we could know when and how we were to die (perhaps even glean beyond our death!?). We could satisfy so many of our nagging curiosities: whether we ever settle beyond Earth, whether we make contact with extraterrestrials, whether we create artificial consciousness, and so on. The possibilities are endless and wide reaching.

Alas, it seems to be scientifically impossible to do so. There is some possibility of traveling to the future (using high gravity or near-light-speed travel), but not of knowing it in the present before it happens.

But is it really scientifically impossible to know the future?

Although that seems to be the prevailing paradigm, it does not appear to be entirely justified. Much of the reason the alternative isn’t a widely accepted concept is a lack of interest and support from scientific ‘authorities’. Sticking to authority over evidence is not in the spirit of science, but it is something we are prone to and must resist.

The evidence in favor of prophecy, precognition, and premonition:

There are decades’ worth of experimental evidence to support the idea that we have an ability to know the future. The effects are not very large, but they are statistically significant nonetheless.

Dean Radin is a researcher in the field of parapsychology and a minor celebrity because of his many international talks on psi phenomena. He has compiled a good list of evidence on his website.
Emeritus Professor of Psychology Daryl J. Bem is another researcher who has published articles, which are available on his website. In 2011 his research paper ‘Feeling the Future’ was published in the Journal of Personality and Social Psychology. His latest research (2014), a meta-analysis of the same phenomenon, appears still to be under editorial review.

I have shared a few of these research articles below (not an exhaustive list):

  • Honorton & Ferrari (1989). “Future telling”: A meta-analysis of forced-choice precognition experiments, 1935-1987 [pdf – source]

In this meta-analysis, the researchers found a significant result after compiling trials from 309 studies, collectively involving 50,000 individuals and around 2 million trials. The purpose was to see whether individuals could predict, at better-than-chance rates, the identity of a stimulus that was going to be presented to them anywhere from a few hundred milliseconds up to a year in the future.
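The way many small effects can pool into a highly significant aggregate is standard meta-analytic arithmetic. As a minimal sketch (with made-up z-scores, not the paper’s actual data), Stouffer’s method combines per-study z-scores like this:

```python
import math

def stouffer_z(z_scores):
    """Pool independent per-study z-scores into one combined z (Stouffer's method)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

def upper_tail_p(z):
    """One-tailed p-value of a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical example: 100 studies, each with an unimpressive z of 0.5
# (p ~ 0.31 on its own), pool into z = 5.0 -- tiny effect, huge significance.
per_study = [0.5] * 100
pooled = stouffer_z(per_study)
```

This is why a meta-analysis can report extreme significance while every individual study looks weak; it is also why pooling is sensitive to selective reporting, since a file drawer of unpublished null studies would change the sum.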

This experiment (like many others) found that we become significantly aware of a stimulus up to a few seconds before it is presented.

In this study, similar to the one above, a visual stimulus was randomly shown to an observer and their electrodermal response was measured. The twist is that the response was measured before the stimulus was seen. Surprisingly, the responses resembled those of an observer who was already seeing the stimulus! Other possible explanations, such as expectation, sensory cues, and other artifacts, were ruled out.

Another study, conducted in two parts, contributes further evidence and replicates earlier findings that the body (here focusing specifically on the heart) is able to perceive information from the future. The authors find that the heart gets involved before the brain in processing this information. Interestingly, the study also claims that females are more intuitively attuned to perceiving the future than males.

  • Radin & Borges (2009). Intuition through time: What does the seer see? Explore: The Journal of Science and Healing, July–Aug 2009, Volume 5, Issue 4, Pages 200–211

This study is very interesting because it uses eye data recorded before and during visual stimuli of differing emotionality and valence. It confirms previous findings of a significant response, but adds depth: the type of autonomic response significantly correlates with the emotionality and valence of the future image as well. So it appears not only that the body knows beforehand that an emotionally charged image is going to be shown, but also the degree and direction of that emotional charge. Here again, females appear to be better at perceiving the future.

  • Bem, D. J. (2011). Feeling the Future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100, 407-425.

Yet another replication of the presentiment results. Here the authors compare guessing rates for erotic versus non-erotic images, and as expected the guessing rate for erotic images was significantly higher than chance. The guessing rate for non-erotic images, on the other hand, was similar to what you would expect by chance.

Here Dean Radin presents a thorough analysis of 75 years’ worth of scientific evidence demonstrating, with high levels of significance, that we can and do know the future.

  • Tressoldi et al (2011). Let your eyes predict: Prediction accuracy of pupillary responses to random alerting and neutral sounds

This study is different in that it compares eye responses to future auditory stimuli. Again the results are statistically significant.

The results are again confirmed. Interestingly, in this study the authors found that higher-quality studies revealed a greater effect size than lower-quality ones.

  • Bem et al (2015). Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events

Yet another meta-analysis, this one pooling results from 90 studies across 33 laboratories in 14 countries, again yielding highly significant results in favor of precognition.

There are negative studies as well:

As with most scientific results, there are negative studies as well, and researchers in the field are not blind to them. One such study (which I found via Dean Radin’s website) is the following:

  • Galek et al (2012). Correcting the Past: Failures to replicate psi [pdf – source]

Here the researchers were unable to replicate the results of Bem et al (2011), ‘Feeling the Future’. However (as the researchers themselves discuss), the replications covered only 2 of the 9 experiments conducted by Bem, and only the retroactive-recall paradigm. On this basis they suspect those experiments suffered a Type I error, but they have no evidence against the other 7 experiments.


I was pointed towards this study, another negative study similar to the one above, also focusing on retroactive recall.

How come no one is talking about it?

These are extraordinary results. Even more so than questioning whether we need the brain for consciousness (as I discussed in a previous post). The weight of evidence seems to be there to support this claim as well (at least in my humble opinion). So why isn’t it shaking up our world view?

My suspicion is that we are clinging to our old ideologies. Time is one-directional, and so is cause and effect. To have an effect before its cause is a complete inversion of our foundational assumptions.

Take an example:

A sound wave that had not yet been produced arrived in our perception, was processed, and produced an autonomic effect that we were able to measure. Only then was it actually produced!

Such notions would dramatically change how we conceive what a sound wave is, indeed how we conceive everything.

What are (or could be) the implications?

Firstly, it makes the study of consciousness and perception even more mysterious than it already is, and deals a hard blow to our current direction of study. In neuroscience we generally think that sensory data comes in, is processed, and that this leads to perception. But it seems we are already perceiving (to an extent) before the sensory data ever had a chance to reach us, let alone be processed. This suggests our field of perception probably extends into the future.

On the note of developing artificial consciousness, we are then also faced with the huge problem of ‘programming’ machines whose perception likewise extends into the future.

Secondly, as the results show that some individuals (e.g. females) are better at feeling the future than others, one has to wonder whether some exceptional individuals might be extreme outliers, able to perceive the future with far greater ability, hence being truly psychic (this is obviously speculative).

Thirdly, could this effect be exploited and amplified to become more useful to us? That would be very exciting indeed.


So what do you think?

Do we even need our brains? – Some scientists aren’t so sure

Once during a discussion, a respectable someone (who I won’t name) suggested to me:

“I believe the seat of the self is the heart. Not the brain! I laugh at these scientific notions stating consciousness is in the brain. I believe in the future scientists will also accept that the true seat of the self is in the heart.”

He went as far as suggesting we ‘think’ with the heart as well. Out of respect I refrained from saying anything in person, but inside I was thinking, “That’s ridiculous!” This person is deeply invested in Eastern Yogic, Sufi, and Buddhist ideas, which gave me some perspective on his views; all the same (being a man of science myself), I rejected them entirely. Though I have high respect for such (Buddhist) ideas, when they lead to rejecting empirical evidence they fail greatly in my eyes.

This person then suggested that there was evidence to support his claim. He referred to stories of normal people who, on brain scans, were found to have no brain in the skull, only water! Again I was thinking: that’s preposterous. Probably anecdotal stories that get exaggerated like Chinese whispers.

I was greatly surprised and humbled when I found out that those stories actually had some merit!

In December 1980, Roger Lewin published an article in the journal Science titled “Is Your Brain Really Necessary?” [1]

This article was based on case studies (hundreds of them) by the British neurologist Professor John Lorber on patients with hydrocephalus, focusing particularly on one case: a university student who had an IQ of 126, had gained a first-class honors degree in mathematics, and was socially completely normal. Yet on doing a brain scan they found that, for all practical purposes, he did not have a brain!

Inside the skull, with no brain tissue

Instead of the normal 4.5 cm thickness of brain tissue, he had only a layer about 1 mm thin. The rest was just CSF (cerebrospinal fluid), which is practically water. This had likely happened through slow displacement of the cortex outwards (against the skull) by the increasing pressure and quantity of CSF. It meant the deeper, more primitive structures were relatively more intact (although still under pressure to shrink and likely not normal either).

This case was well documented and has been greatly debated, including in the original article. The leading explanation seems to be neuro-adaptation (which has also been called a cop-out). Nevertheless, such observations remain difficult to explain away. As Emeritus Professor William Reville states [2]:

“I certainly cannot explain Lorber’s observations, except to note that in some cases the brain shows itself to be amazingly adaptable and capable of servicing the body in a manner equivalent to the familiar “normal” brain, even though its volume and structure is remarkably compressed and distorted.”

Lorber’s other interesting observation was that this isn’t a unique finding. In fact, in his studies, 50% of people with more than 95% of the cranium filled with CSF still had an IQ greater than 100! [1]

Now I don’t know why this hasn’t shaken the field of neurology as it should have. I, on the other hand, find it earth-shattering.

Perhaps neuroadaptation can explain some takeover of functions. But surely having a 50-to-150-gram brain (with only a millimeter-thick cortex), compared to the normal 1.5 kg brain, should have huge impacts on cognition. Our neurological theories usually associate the cortex with specialized areas of processing, e.g. the sensory, motor, auditory, and visual cortices; other functions include abstraction, calculation, sequencing, memory, etc.
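The 50-to-150-gram figure is at least consistent with simple geometry. As a back-of-envelope sketch (the skull radius and tissue density here are my own rough assumptions, not measurements from the case):

```python
import math

def shell_mass_g(radius_cm, thickness_cm, density_g_per_cm3=1.05):
    """Approximate mass of a thin spherical shell of brain tissue lining the skull."""
    volume_cm3 = 4 * math.pi * radius_cm ** 2 * thickness_cm  # thin-shell volume
    return volume_cm3 * density_g_per_cm3

# A ~1 mm (0.1 cm) cortical shell lining a skull of ~8 cm inner radius
# comes out around 84 g -- inside the 50-150 g range quoted above.
mass = shell_mass_g(radius_cm=8, thickness_cm=0.1)
```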

Apparently all these specialized areas get compressed and mushed into a millimeter-thick layer and still function properly, although the article makes no mention of the morphological changes. Other, more recent research suggests there is extensive axonal, cytoskeletal, and synaptic damage in hydrocephalus. There is relatively less neuronal death, though secondary changes are observed in neurons as well. [3][4][5]

There is such great damage to the communication apparatus, yet still normal levels of intelligence and cognition! Does this not beg some reflection on our current direction of understanding? Most of our theories give communication and signaling a central role.

The reduced space would also reduce the ability to form new interconnections between neurons as the cortex is flattened. These new interconnections are a fundamental basis of how we generally understand brain activity and how we explain learning and memory.

Take, for example, grid neurons, which have been experimentally shown to be located in the entorhinal cortex and which, along with place neurons (whose discovery earned the 2014 Nobel Prize in Physiology or Medicine), constitute the navigation system of animals and humans [6][7]. The mechanics of grid neurons are complex, but an essential component is their ‘modular organization’ (a good article describing this can be found here). This is an example of the ‘spatial’ organization of neurons having a specific and integral purpose. I wonder: when these too get compressed and distorted in shape, how can there still be a working navigation system?

If we accept that the 1 mm-thick brain is still fully responsible for all cognitive, conscious, and subconscious processes, then at least we have to concede that all these processes (including consciousness) are much simpler and should be easier to explain; the brain shouldn’t be immensely complicated. This view would also lend support to the idea (which is my personal intuitive inclination as well) that consciousness has more to do with the specific configuration into which a human brain develops than with any specific structural parts and/or the complexity of specialized areas.

Overall, this article made me much more malleable in my views. It also goes to show how, especially in the social sciences, we are biased towards deriving conclusions from population or summated data, finding the best fit or a generalizing principle from a collection of individual data while overlooking the significance of the individual data itself.

Does that mean we Think with our Hearts?

This still sounds like a ridiculous conclusion; after all, the heart as we know it is mostly made up of cardiac muscle tissue.

However, humbled out of my previous staunch opinions, I tried to dig a little into these murky waters and found some very interesting things.

An article published in 2003 in The Guardian suggested, based on anecdotal evidence, a concept of transplanted memories. Such memories apparently occur in some heart transplant patients, who develop new tastes or undergo a change of personality resembling the original heart donor.

A few other interesting concepts are suggested in the same article. First, that the ‘Auerbach plexus’ functions as a second brain in the gut, which may govern emotional responses or ‘gut feelings’. Second, the idea that neuropeptides, which are found throughout the body, give a sense of ‘self’ and are carriers of our emotions and memories.

Neuropeptides may carry a sense of self, memory, and emotions

Now, in general, these ideas do not come across as very plausible to me, at least not to the extent of explaining the full picture of what we observe. However, it does seem that scientists, not Eastern mystics, are the ones who have suggested these ideas and/or are working on them.

I found a good article in the Namah Journal (whose credentials I’m not sure of). It goes into some detail covering many of these alternative ideas and others [8]. It also provides a list of references at the end to back up its claims and hypotheses.

To sum up, personally I don’t know whether to accept alternative explanations or to look at ways in which neuroplasticity by itself might be sufficient. Either way, it does make a dent in my preconceived ideas, and I am humbled by that.

So what do you think?

  1. Roger Lewin (12 December 1980). “Is Your Brain Really Necessary?”. Science, 210(4475): 1232–1234.

Operant Learning Shows Bacteria can Imagine! – True or False

Is imagination necessary for operant learning? And if so, are bacteria imagining?


This question came from an interesting discussion I was recently having on selfawarepattern’s blog post regarding Consciousness and Panpsychism. The author says:

“..Imagination is the fourth layer.  It includes simulations of various sensory and action scenarios, including past or future ones.  Imagination seems necessary for operant learning.. “

After several replies, I thought it would be a good idea to present this as a separate post here. To be fair, the author only extends imagination to vertebrates with the ability to sense at a distance. But can we take it a few steps further than that?

Operant Learning:

When talking about classical examples of operant conditioning, we usually refer to the Skinner box experiments:
In this experiment, the rat’s bar-pressing behavior is the ‘operant’. Its consequence is a food pellet (a positive reward), which acts as a ‘reinforcer’ for the preceding behavior. If the reward is given every time the bar is pressed (continuous reinforcement), then learning takes place based solely on the behavior (operant) and its consequences (reinforcer). It is not based on imagination, but only on actions (behavior) and reactions (consequences).
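The point that operant learning needs only past consequences, not foresight, can be made concrete with a toy simulation. This is a minimal sketch under my own assumed parameters (initial propensity, learning rate), not a model from the behaviorist literature:

```python
import random

def simulate_operant(trials, reward_prob, seed=0):
    """Law-of-effect agent: the propensity to press the bar is adjusted only by
    past consequences. Nothing in the update looks ahead or imagines outcomes."""
    rng = random.Random(seed)
    p_press = 0.1   # initial propensity to emit the operant (bar press)
    lr = 0.05       # learning rate
    for _ in range(trials):
        if rng.random() < p_press:          # the operant is emitted
            if rng.random() < reward_prob:  # consequence: food pellet
                p_press += lr * (1 - p_press)  # reinforcement strengthens it
            else:
                p_press -= lr * p_press        # non-reward weakens it
    return p_press

# Continuous reinforcement drives pressing up; never rewarding extinguishes it.
acquired = simulate_operant(1000, reward_prob=1.0)  # climbs toward 1
extinct = simulate_operant(1000, reward_prob=0.0)   # decays below its start
```

Acquisition and extinction both fall out of a purely backward-looking update rule, which is the sense in which the contingency itself carries no imagination.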

A good explanation can be found here:

One of the commenters (Paultorek) argues:

“…research has analyzed the brain activity of rodents trained in such tasks, and finds that when they are (by the above hypothesis) anticipating future results, memories of the past experiences are being activated… “

However, I argue such behavior is not limited to humans and vertebrates, but extends to almost all organisms, including protozoa and bacteria. The only conditions are the ability to change the environment and the possession of a goal, which for bacteria can be brute survival alone.

As for the brain activity analyzed in rodents during such behaviors, the biggest issue is that their brains are not the same as ours, so how do we know they are imagining as we do?

In a general sense, processing of such learned behavior happens in the bacterium, the rodent, and the human. The processing in bacteria is simpler than in the rodent, and the rodent’s processing is simpler than the human’s. But all of it occurs through chemical processes.

So if we can extend the courtesy of imagination to rodents, why not extend it to bacteria as well? My opinion is that we cannot extend this courtesy at all!

Take Gambling for example:

Gambling machines are a good example of operant conditioning being exploited in humans. When gambling activity leads to an occasional reward, the activity is reinforced. Yes, one could say that the gambler can imagine getting a reward, but that is not what drives his behavior. It is the reinforcement that drives the behavior, and imagination is entirely separate from this contingency.

This is because the gambler can also imagine NOT getting the reward, which is in truth the outcome he most often suffers. Such imagination, however, usually does not reduce his gambling behavior.

The pull of gambling (through operant conditioning) works in the opposite direction, and resisting it is an uphill battle. It can go to the extent of becoming a disease, now formally recognized in the DSM-5 as ‘gambling disorder’.
So as far as operant conditioning goes, there is no role for imagined outcomes, only for outcomes. Any imagination that happens is separate from this contingency.
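The persistence of gambling despite mostly negative outcomes can be shown with the same kind of consequence-only model. A minimal sketch, assuming (my assumption, purely for illustration) that a rare win reinforces much more strongly than a common loss punishes:

```python
import random

def simulate_gambler(trials, win_prob, seed=1):
    """Asymmetric law-of-effect: wins strengthen the urge to play far more than
    losses weaken it, so occasional rewards sustain frequent play."""
    rng = random.Random(seed)
    p_play = 0.2    # initial propensity to gamble
    for _ in range(trials):
        if rng.random() < p_play:
            if rng.random() < win_prob:
                p_play += 0.2 * (1 - p_play)   # rare win: strong reinforcement
            else:
                p_play -= 0.01 * p_play        # common loss: weak punishment
    return p_play

# Winning only 10% of the time still holds play frequency well above baseline,
# while a machine that never pays out extinguishes the behavior.
hooked = simulate_gambler(5000, win_prob=0.1)
never_pays = simulate_gambler(5000, win_prob=0.0)
```

Again, nothing in the update imagines any outcome; the behavior is sustained by the schedule of past consequences alone, which is the post’s point about the contingency.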

  2. Huitt, W., & Hummel, J. (1997). An introduction to operant (instrumental) conditioning. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved from
  3. C.F. Lowe (1985). Behaviour Analysis and Contemporary Psychology. Retrieved from

8 Criticisms of Christof Koch’s Consciousness + Panpsychism

I’m watching a TEDx talk by Christof Koch, ‘The Scientific Pursuit of Consciousness’. Christof Koch is a prominent neuroscientist who says that since he was a child he has always thought that dogs have consciousness, as they seem to display emotions such as hunger, anger, and anxiousness. Appalled by the Christian view that only humans have souls and go to the afterlife, he always thought there must be some place for dogs too. Grappling with these issues, he came across the ancient concept of panpsychism, which he feels answers the question of consciousness.

While watching I found myself running a criticism in my head and thought I’d put it on paper to see if there’s anything I’m missing. (I have presented my criticisms below in the chronological order of his talk.)

He claims there are three strong arguments in favor of panpsychism:

  • Biological
  • Metaphysical
  • Aesthetic

Biological Argument for Panpsychism

Starting with the biological argument: he says that since we all have similar brains and language ability, we generally agree that each of us has consciousness, although none of us can know the subjective experience of anyone else. Extrapolating from that, we extend this to human children and infants.

We can even extend this to some animals, such as apes, elephants, and dolphins. He argues that many of these animals perform tasks which, if a human were doing them, we would take as signs of consciousness.

#1. Criticism: Although we can generally assume other humans have consciousness, in truth we can only be sure of our own. Given this, it becomes harder to extrapolate to less similar organisms. We cannot easily claim that neonates have consciousness (taking the strict sense of conscious experience that we feel); it would be up for debate. To extrapolate it onto animals, even smart ones, is a big leap.

Describing the human brain, Christof Koch says that none of its constituents differ significantly from those of other animals. As far as size goes we are not unique; an elephant’s or a dolphin’s brain is much larger. Anatomically there isn’t a big difference either, and only an expert neuroscientist under a microscope can differentiate a human brain cell from a monkey’s. In fact, he states:

“Neuroscientists have been unable to identify anything singularly exceptional about the human brain”

#2. Criticism: Given that he is a neuroscientist, I found this statement very shocking. One big difference is the brain-to-body ratio, which is far higher in humans than in any other animal. It is generally quoted as a measure of intelligence! But I’m not taking that line of argument (what difference should it make if an animal’s body is big, as long as its brain is similar?).
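The brain-to-body point is usually formalized as Jerison’s encephalization quotient (EQ): observed brain mass divided by the mass expected for a typical mammal of that body size. A minimal sketch with ballpark, illustrative masses (the figures below are rough assumptions, not cited measurements):

```python
def encephalization_quotient(brain_g, body_g):
    """Jerison's EQ: observed brain mass over the expected mammalian brain mass,
    where the expectation is E = 0.12 * (body mass in grams) ** (2/3)."""
    return brain_g / (0.12 * body_g ** (2 / 3))

# Ballpark masses in grams:
human = encephalization_quotient(1_350, 65_000)        # roughly 7
elephant = encephalization_quotient(4_800, 5_000_000)  # roughly 1.4
# The elephant's brain is larger in absolute terms, yet the human brain is
# several times bigger than body size alone predicts.
```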

Another area to consider is the neocortex (the latest part of the brain to evolve), which is significantly larger in humans (but also in certain dolphins, in fact even larger than in humans!). The problem here is that this line of reasoning does not support panpsychism whatsoever.

The Long Finned Pilot Whale has the most neocortical cells in all mammals studied so far, including humans!

To say the human brain isn’t unique, hence consciousness must be more prevalent, is very ambiguous. The brain is certainly unique in that it is a human brain and not that of another species. This is due to an accumulation of some unique and rare features plus many more prevalent ones, but the exact configuration is only human.

Focusing on the forebrain, for example: not many animals have this area significantly developed. If it were considered a marker for consciousness, then perhaps we could argue that certain animals with large forebrains are conscious. But we cannot claim consciousness is therefore universal; all we have done is admit a few more members to the conscious group.


He also argues that language is often used as an argument for consciousness. He sees this as inherently biased because, as he states:

“Interestingly this excludes all other species from possessing consciousness except us.. In fact, it was meant to do that. A species has a radical desire to come on top of any ranking” (the crowd giggles at this)

#3. Criticism: This one is the hardest to digest. He kind of makes fun of the idea and brushes it under the rug. My opinion is that language has a big role to play in consciousness, and it is at least worth addressing. To say it was ‘meant to’ exclude other animals doesn’t answer or add anything. I for one am not a ‘speciesist’ and have no beef with other animals. What I’m interested in is consciousness, and whether language plays a part in it or not. After all, it is a uniquely human ability, and we know for certain that humans are conscious.


I also take issue with calling consciousness the ‘top’ of some ranking. Why is it the top? (Claiming such things is speciesism, as far as I’m concerned.)

I’ve had a good discussion about the role of language in the comments section of a blog post:

In summary: using language we can define things in the outside and inside worlds. What is ‘real’ can only be experienced, but in order to be aware of it, think about it, or talk about it, we need to be able to describe it. This description is the function of language. Without language, what is the difference or similarity between a tree, a stone, a bird, or anything else? (All of these are words of language.)


Going on, he states that there is no demarcation between us and any other animal (from worms to apes). Hence, he says, it is ludicrous to suppose that we are exceptional in consciousness.

#4. Criticism: Except that none of them has language. Apes have a smaller forebrain. Worms don’t even have a proper brain. So it’s not ludicrous at all. (On the other hand, it does seem ludicrous to state that the internet or an electron is conscious.)

Metaphysical Argument for Panpsychism

Siding with Buddhist philosophy, Christof suggests it is much more rational to assume that we are all children of nature, and that all multicellular organisms possess some degree of consciousness.

He sees panpsychism as a very clear and coherent intellectual framework. Starting from Descartes’ “I think therefore I am”, Christof asks: where does this subjective experience come from?

If a large brain such as ours can have the feeling of a throbbing headache, where did this come from if it wasn’t already present in simpler and smaller brains?

Building from this he proposes his panpsychism view:

“All complex systems have consciousness”

“All complex systems have two surfaces: an exterior surface which is accessible to everyone, and an interior surface which is subjective.”

So consciousness is immanent in the universe, such that any highly organized matter brings with it the experience of consciousness. In the panpsychist view, consciousness is a fundamental property of the universe, similar to energy, time, matter, etc., and no further reductionism can be done.

#5. Criticism: If Buddhism says we are all children of nature, I may agree, but does that say anything about consciousness? I doubt it.


Christof Koch differs slightly here in saying that consciousness only comes with highly organized matter (in other words, it is not pervasive). I still fail to see how this answers anything; it feels more like a cop-out for material science.

Does he feel there is some type of matter which is NOT conscious (which is a bit contrary to panpsychism)? If so, where is the demarcation between conscious and non-conscious matter? This seems a self-defeating view to hold.

Why should all complex systems have consciousness? What do we mean by complex? Where is the evidence that they possess consciousness? What can we exclude as being a complex system?

An exterior and an interior surface, huh? That’s saying the same thing in different words; it doesn’t explain anything further. It’s the same as saying complex systems have subjective experience. The glaring questions remain: why? And what’s the evidence?


The aesthetic argument for Panpsychism

He states that these panpsychist claims can be very precisely expressed in the language of mathematics, using Tononi’s Integrated Information Theory, which, in summary, states that any integrated system has a degree of consciousness, and this can be calculated as ‘Phi’.

We cannot see something except as an integrated whole; for example, you cannot be conscious of a cat without the integration of its colors and shapes. That is, you cannot see it in black and white no matter how hard you try. At the neural level this is realized via a complex process of association between many neurons. And if this integration at the neuronal level disappears, for example during a seizure or deep sleep, the conscious experience disappears as well.
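The intuition that a whole can carry more structure than its parts can be made concrete with a toy measure: the mutual information between two halves of a system. To be clear, this is my own minimal sketch of the flavor of integration, not Tononi’s actual Phi, whose definition is far more involved:

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {state: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def integration(joint):
    """Mutual information between the two halves of each joint state (a, b):
    zero when the halves are independent, positive when the whole carries
    more structure than the parts taken separately."""
    left, right = {}, {}
    for (a, b), p in joint.items():
        left[a] = left.get(a, 0.0) + p
        right[b] = right.get(b, 0.0) + p
    return entropy(left) + entropy(right) - entropy(joint)

# Two independent coin-flip halves: no integration (0 bits).
independent = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}
# Two perfectly coupled halves: maximal integration (1 bit).
coupled = {(0, 0): 0.5, (1, 1): 0.5}
```

Even this toy version makes one of my later worries visible: anything with nonzero coupling scores above zero, so a theory equating integration with consciousness owes an account of why some integrated systems apparently don’t count.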

#6. Criticism: I agree that we only perceive things as an integrated whole, and integration as we understand it does take place in the brain. I find this interesting and feel it may teach us something about consciousness. But does it fully account for consciousness?


The stance here is to find a generalizing principle (integration). But what of the specific configuration of human brain areas? What about our unique ability to use language (as discussed above)? What about unconsciousness? There are many integrated processes happening within our brains that we are not conscious of.

So why should integration equate to consciousness?

Taking an analogy: if we take away the bonds between carbon atoms we cannot have a diamond, and it is only through bonds between carbon atoms that we get a diamond. So is a diamond made by carbon bonds alone? No, because carbon bonds can also form graphite and graphene. It is rather the specific configuration that makes a diamond.


Using the theory, he attempts to tackle qualia. Taking colors as an example, Christof says that according to the Integrated Information Theory:

every color (or any other quale) is associated with a specific geometry in a ‘hyper-dimensional qualia space’. It is this geometry itself which is the experience of consciousness. We don’t experience the outside world, but only states of our brain (this geometry is that state).

He lends further support to this theory with the example of the cerebellum. The cerebellum contains about two-thirds of the brain’s neurons, yet even if it were destroyed, consciousness would be unaffected. This, he postulates, is because the anatomy of the cerebellum is very simple compared with the rest of the brain, where the connections are far more complex and integrated.

Furthermore, this theory is being used clinically to assess the level of consciousness of comatose patients!

#7. Criticism: Here again I see another baseless claim. The geometry triggered in hyper-dimensional ‘qualia space’ is what we perceive as the perception? Other than being a statement with no backing, I don’t see any value in it. When one uses the same word (qualia) to describe itself, you know we are getting nowhere:

Q: What is a flower? Ans: “it is a flower in a field of flowers”.


The cerebellum argument, in fact, goes against the original claim that integration is integral to consciousness, because, although minor, a significant amount of integration still happens in the cerebellum (yet apparently no consciousness).

The notion that this is being used clinically is alarming as well, because potentially very significant decisions can be based on this theory, which I feel is unsubstantiated. I have mixed feelings only to the extent that it may serve to reduce anxiety and aid decision making, which is in itself something to consider. The soundness of those decisions is another issue.


The theory draws no distinction between integration in brains and integration in silicon circuits. Any other integrated matter, such as computers and, more importantly, the Internet, which has on the order of 10^19 transistors in total (far more than the synapses in a human brain), could be conscious, and it is interesting to think about this.

#8. Criticism: This notion is similar to the global brain hypothesis, which I touched upon in my post on Cybernetic Epistemology. Following on from my criticisms above, this claim is another extrapolation from a baseless theory. It in fact serves as evidence against the theory, because as far as we know the Internet is NOT conscious and we are. Even though we can only truly know our own consciousness, we all still agree other humans are conscious, based on our own evidence. But what evidence do we have for the Internet being conscious? None.

This in a way shows what happens when we try to sidestep the material sciences. We should in fact conclude that since the Internet is an integrated system and is not conscious, integration alone cannot explain consciousness.


Do share your thoughts in the comments section below 🙂

Panexperientialism Vs Panpsychism Vs Animism – Outlined in a Table

I have been doing some reading and research around these ideas, and it seems that Panexperientialism is often confused with the related, and much older, ideas of Panpsychism and Animism. For the sake of clarity I have attempted to summarize the differences in the table below (further explanations, especially with regard to Panexperientialism, follow below it):

[table id=1 /]


Panpsychism is the view that everything in the universe is conscious. That everything from the smallest such scale as quantum particles to the largest such as galaxies and in fact the whole universe possesses consciousness.

Panpsychism can be thought of as an umbrella term and sometimes Panexperientialism is thought of as being within this umbrella. Panpsychism is a philosophy based on the notion that you cannot arrive at consciousness without consciousness being fundamentally present, to begin with.

That is to say that, no matter what you do with materials that are supposed to be inert (as scientific materialism proposes), no matter how complex they get or any structure they arrange in, you can never arrive at consciousness.

Hence consciousness must be fundamental and everything must have a consciousness of its own. Implied here is that there is a subjectiveness to being anything, e.g. an electron. That there is something it is like to be a bat, and something it is like to be a mountain (although there are nuances here).

This is an interesting concept and is gaining a lot of popularity recently with many eminent philosophers and scientists starting to warm up to this philosophy. I intend to discuss this in more detail in a separate post.


Animism is a spiritual concept and the oldest known belief system in the world. It is similar to Panpsychism, though subtly different: it is the belief that everything, including inanimate things, has a spirit. The sea has a spirit, and so do the wind, the forest, the rocks, the moon, etc.

In a way, there is an implied uniqueness and identity to each of these spirits. This differs from Panpsychism, where the focus is on consciousness only, and where at an elementary level consciousness should be similar and pervasive for all things (i.e. there is no uniqueness), although in Panpsychism too there can be various arrangements and aggregations of this consciousness. Panpsychism does not in itself imply a purposeful consciousness of, say, the river, as spiritual Animism would; it would rather view the river as an aggregation of many smaller consciousnesses.


The term Panexperientialism was coined by Philosopher David Ray Griffin in the 1970s, to capture Whitehead’s metaphysical world view.

Panexperientialism does not claim that inanimate materials, be they molecules or rocks, have a consciousness or a spirit. Nor does it make any claims about consciousness at the quantum level, although one could derive theories within its metaphysics of how consciousness could be arrived at. I feel it’s best to read Alfred Whitehead’s original ideas to get a clearer picture of his philosophy. A good paper to read is ‘Panexperientialist Physicalism and the Mind-Body Problem’ by David Griffin. For a quick overview, you can read the Alfred Whitehead page on Wikipedia.

I have tried to summarize Panexperientialism here:

In essence, Whitehead states some very clear and obvious facts which have been denied by classical scientific materialism: namely, that the assumption that a material thing continues to be the same throughout time is false.

In materialism, we believe that fundamentally things remain the same and any change is only secondary. For example, “Sarah became obese after steroid treatment”, here we assume Sarah has an identity which continues to be the same throughout time and any change (obesity) is a secondary thing. That’s why we say ‘Sarah’ became obese, assuming Sarah remained the same.

This assumption, although useful for language, has no justification of its own. It would be truer to say that things are always in a state of change.

The thin Sarah and the obese Sarah are in reality fundamentally different. This isn’t only because she became obese; looking at it more closely we realize most of her cells will have been renewed after some time. Even more fundamentally, all the molecules in her body have changed in many variables, for example in that they are no longer at the same place or time.

So what makes Sarah, Sarah? It is in fact an abstract idea we have imposed, and we can see that all identities are abstract metaphors we assign in the same way.

So an electron is not a stationary, brute, defined piece of material which travels in space and time and which only reacts to external forces (secondarily to its being an electron first). Rather it is something that is constantly changing in an unpredictable way, in relationship to other similarly changing things.

It is the interaction they have with each other as a whole which defines them. An electron has an unpredictable nature, you could call it a creative nature, which is bound within the limits created by all the other unpredictable things. Hence they define each other and give meaning to each other. In fact, it is only relative to each other that they exist, because if something does not interact with anything else then it cannot be observed in any way and could be thought of as not existing at all.

This is the concept of Panexperientialism, which proclaims that nothing is inert, that no material has only external causes acting on it, as is the view of scientific materialism. Rather, everything has a will of its own, an unpredictable nature of creativity/possibility, and exists in relationship to everything else in a state of flux.

This experiential flow exists in the moment of time and as the moment of time moves on the previous moment can be seen and measured. In a way, it has become concrete, because it is now in the past.

This begs the question: who is experiencing? An experience must have a subject, hence subjectivity and consciousness?

To make his position clear, Whitehead coined the term ‘prehension’. Prehension, in short, means unconscious experience. Whitehead divides it further into two types: 1. physical prehension (causal efficacy) and 2. conceptual prehension (presentational immediacy). More on this can be found in Whitehead’s writings and on his Wikipedia page.


I hope this was helpful. Don’t forget to leave a comment below.

Cybernetic Epistemology: How machines gain knowledge

The two seemingly different domains of cybernetics and epistemology have an intriguing fusion in the concept of cybernetic epistemology.


Cybernetics, as we know, is the study of self-governing and self-organizing systems. See Cybernetic Theory.


Epistemology is a branch of philosophy dealing with the nature and scope of knowledge. In epistemology we ask questions like: what is knowledge, how is it acquired, what are its structures and limits, what makes beliefs justified, and is justification internal or external to our mental experience?

Cybernetic Epistemology

The key figure to link these two domains and formulate the concept of cybernetic epistemology was the Soviet theoretical physicist and cyberneticist Professor Valentin Turchin.

He was a pioneer in the development of artificial intelligence languages and created REFAL, one of the first of them. He also developed the theory of Metasystem Transition.

Metasystem Transition:

This is the view that the integration of elementary objects leads to the evolution of a higher level of organisation and control, i.e. an emergent phenomenon occurs. Pertinent examples of this are the origin of life and the origin of symbolic thought.

In an article he wrote in 1993, Valentin Turchin stated that it was his belief that, to succeed in transferring to the machine our ability to understand natural language, it is necessary to break down the meaning of the language into elementary units.


Valentin Turchin had a stance on epistemology (which I incline towards as well) called constructivism.


The constructivist view is that learning and the acquisition of knowledge is a constructive process: we construct subjective representations of objective reality through personal experience and hypothesis testing. New information is related to and compared against previous knowledge; no one is a blank slate when acquiring new knowledge. Hence everyone has their own unique construction and mental representation of the world.

He was also a proponent of the global brain hypothesis, and that is where the above ideas tie in together with the concept of cybernetic epistemology.

Global Brain Hypothesis:


What this hypothesis suggests is that the increasing use of the Internet connects people all over the world together. In a way this connection and communication serves a function similar to that of the neurones in our brain, and through this integration an emergent phenomenon occurs: a super-intelligence present at the planetary level, of which we are only components.

What is cybernetic epistemology then?

What cybernetic epistemology asks about is the nature of knowledge in integrated machines, artificial intelligences or super-intelligences. What does it mean to know something that is outside the human experience and resides in these systems? What are the structure, justification and limits of this knowledge? Is it similar to or different from our own? Are these integrated machines able to construct their own representations of the external world by the means/senses available to them?

By developing the philosophy and understanding of cybernetic epistemology we might be able to better understand how integrated machines understand things and how they might use this understanding. This might be important in the development of more intelligent and human-like language abilities.

A Quick Explanation of Cybernetic Theory With 2 Examples

Cybernetics is the scientific study of self-governing/self-regulating systems. The idea traces back to Plato, who used the term for the self-governance of people; nowadays cybernetics is generally used in the context of artificial intelligence. However, cybernetic theory can be applied to a multitude of other disciplines, for example biology and even psychology.

Biological Cybernetic Theory Example:

The concept of homeostasis, in human biology and biology in general, is in essence the concept of cybernetics. For example, the body wants to maintain the core body temperature at 37 degrees Celsius. Whenever the temperature rises above this threshold, sensors in the body pick this up and relay the information to the thermostat area in the hypothalamus of the brain, which recognizes it and then sends signals to the sweat glands to produce sweat so that the body cools off. When the temperature comes back within range, the hypothalamus stops sending these signals. In this way, without any external intervention, homeostasis of body temperature is maintained. This is exactly what cybernetic theory studies.
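The loop above can be sketched as a tiny simulation. This is a deliberately crude model, the 0.3 and 0.05 adjustments are made-up numbers, but it shows the negative-feedback shape of the mechanism:

```python
def thermoregulate(temp, steps=50, set_point=37.0):
    """Crude negative-feedback loop: sweat when hot, shiver when cold."""
    history = []
    for _ in range(steps):
        error = temp - set_point          # sensor: deviation from 37 °C
        if error > 0.2:
            temp -= 0.3                   # effector: sweating cools the body
        elif error < -0.2:
            temp += 0.3                   # effector: shivering warms the body
        temp += 0.05                      # disturbance: constant ambient heat gain
        history.append(temp)
    return history

print(thermoregulate(40.0)[-1])  # settles close to the 37 °C set point
```

Whatever temperature you start from, the loop pulls it back toward the set point with no outside intervention, which is the essence of a cybernetic system.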

Artificial Cybernetic Theory Example:

From the study of cybernetics we attempt to create such systems in engineering and in artificial intelligence. A simple example is the cruise control option in relatively newer cars. Here again we give the system a target speed, say 60 miles per hour. If the car starts to slow down or speed up due to a slope, air resistance or friction, sensors automatically pick this up. The system then increases or decreases the force on the accelerator depending on whether it needs to raise or lower the speed to reach the target. This keeps the car at a steady pace with no human intervention.
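A minimal sketch of such a controller, here a simple proportional one with invented gain and drag numbers rather than anything from a real car:

```python
def cruise_control(speed, target=60.0, kp=0.5, steps=100):
    """Proportional feedback: throttle force proportional to the speed error."""
    for _ in range(steps):
        error = target - speed     # sensor reading vs. target speed
        throttle = kp * error      # controller: push harder the further off we are
        speed += throttle - 0.5    # drag/slope acting as a constant disturbance
    return speed

print(cruise_control(45.0))  # converges to a steady speed near the target
```

Interestingly, a purely proportional controller like this settles slightly below the 60 mph target (at 59 here), because some throttle error is needed to balance the constant drag; real cruise controls add further terms to remove this steady-state offset.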

Cybernetic theory often gets confused with artificial intelligence, sometimes even used synonymously and interchangeably with it. The difference between the two is elegantly summarised by Paul Pangaro on his website: artificial intelligence grows from a desire to make computers/robots/software more intelligent and smart, whereas cybernetics grows from a desire to understand and build systems that are able to achieve defined goals.

Imagine A World In Which Humans Talk To Machines

Can you imagine a world in which humans talk to machines? We’ve all seen it in so many sci-fi and futuristic movies and games: droids like C-3PO from Star Wars or the bartender from Passengers, virtual artificial entities like in Mass Effect, or holographic representations like in the movie The Time Machine. It would be epic and cool, to say the least.

We are beginning to imagine a world in which humans talk to machines:

This is exactly what artificial intelligence research is trying to do these days. There are a number of AI programs already doing just that, for example Siri on our iPhones. Another one gaining prominence is Amazon Echo.

There is another interesting one I found online, made by Existor Technologies. They have a number of interesting bots that you can talk to directly; these have responsive human avatars that try to have a conversation with you. In fact, one of them is actually a chimp! I’ve played around with them a bit and they’re pretty impressive.

Although I think we’re far from a world in which humans can actually talk to machines, the progress made so far is remarkable. The difficulty comes when we speak in a more creative and intuitive sense; I find that these artificial intelligence programs have a big problem trying to understand this. It’s easy for the machines to understand simple language which has been pre-programmed into them, and using deep learning and contextual artificial intelligence they even go a step beyond that, by learning the normal context of human speech and becoming self-learning (for more on this see my article on Artificial Intelligence Basics).

But this does not seem to solve the problem (not yet anyway). I think the biggest issue these machines have is that they cannot think in abstract and symbolic ways like we do. It would be something to look forward to, though, if it were ever possible. So none seems to pass the Turing test as far as I’m concerned. Not yet anyway.

(Photo credit: Brother UK)

5 Artificial Intelligence Basics That Will Make Your Head Spin

So I’m going to start from the ground up: in order to build an object that is intelligent, what are the basic requirements I need to achieve? We will call these the artificial intelligence basics. Looking at the concept of artificial intelligence, there is much confusion, so for clarity let’s divide what I want to build into 2 broad categories:

  1. The practical / real life scenario. We’ll call this “Artificial Intelligence Basics – Lite”
  2. The idealistic scenario. We’ll call this “Artificial Intelligence Basics – Ultimate”

My curiosity and spirit of discovery want to achieve the second scenario, by which I mean a complete artificial intelligence that can have its own internal (mental) existence. There are questions as to what that might entail from a moral or existential standpoint, however I leave that discussion for a later post in this series. The first scenario is what we are actually building in the world today, or may build from a practical standpoint, for example what Google, Microsoft, Facebook etc. are working on. Let’s consider each of these now:

Artificial Intelligence Basics – Lite :

What Google, Microsoft and Facebook are doing in the field of artificial intelligence is remarkable; however, as the field advances it becomes more and more complicated. If you search for the artificial intelligence or machine learning projects that big companies are working on, there is a barrage of sophisticated terminology to face, such as deep learning, DeepFace, DeepText, neural networks, embodied agents, symbolic and sub-symbolic AI, brain simulation, cybernetics etc. To simplify things, we are going to stick to the basics of artificial intelligence in this article. There is no hard and fast rule, but I’m going to try to summarize the 5 basics of this Lite version first:

1. Information Processing / Problem Solving

As discussed in my post The Concept of Artificial Intelligence, this is the core of what computers can do. In humans, this skill is part of the executive functions and is accomplished (theoretically) in the dorsolateral prefrontal cortex and parietal cortex of the brain. In computers it is done by logic circuits performing mathematical calculations in binary. The CPU computes these calculations. The RAM helps by storing pieces of information temporarily while the CPU processes other parts of the calculation, much like working memory (another executive function in humans). The hard drive is where information is stored for long periods until edited, much like long-term memory in humans (which is not an executive function).

So every computer is in a way artificially intelligent, in the sense that it can do human-like processing and solve well-defined mathematical problems. The ability to convert a practical issue into a mathematical one takes this a step further. This is where the next three ideas come in, i.e. recognition, understanding and communication.

2. Recognition


Recognition works on many levels. Let’s take an example: I want my virtual assistant to find me a shop, with good ratings, that is near me and sells wheelchairs. To make it harder, I don’t want to type this in; I want to talk to it like a person. How will my virtual assistant be able to solve my problem? Being able to recognise my spoken words is the first step, and this is where speech recognition technology is progressing (Apple and Google are at the forefront).

Now, just being able to capture the sound wave patterns and relate them to a database of previously captured sound wave patterns isn’t enough. Yes, by doing this the software can recognise what each word probably is, but that doesn’t mean anything on its own. It just relates one sound pattern to other ones which seem similar; hence the only thing gained is a differentiation from other patterns (a classification).

3.  Understanding (?Comprehension)


This is one of the entrances to the rabbit hole of AI research. Up to the previous step, the assistant is only dealing with a set of differentiated information, i.e. ‘shop’, ‘good’, ‘near’, ‘me’, ‘wheelchairs’ are all separate things, but beyond that there is no sense to them. So how can we add meaning to this and start making some sense?

One approach is to use external references and databases. For example, ‘shop’ could automatically link to all items tagged shop in Google Maps’ database; similarly ‘wheelchair’ could link to all data tagged wheelchair. ‘Good’ can have a value of, say, 7 on a 0-10 scale, with a database of user ratings acting as a reference to compare against. ‘Me’ can automatically refer to the speaking object itself. ‘Near’ can have an arbitrary value of, say, 0.7 miles and link into a geo-positioning database.

Think of it as a filtering mechanism. To start with, the assistant has billions upon billions of options; after recognising the words, the filters start to be applied. First, ‘me’ automatically gives the geo-position of the speaker, then ‘near’ gives a radius over all items in a database (or even multiple databases, e.g. one from Google, another from Bing). Then ‘shop’ filters the results to leave only the items in the shop category (it can also use related words like market, store, bazaar, plaza, outlet etc. if programmed to do so). ‘Wheelchair’ filters these even further, so that only the shops/markets/stores with wheelchairs as an item remain. ‘Good’ narrows this down a bit more, because only shops above a certain rating remain.
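The filtering chain described above can be sketched in a few lines of Python. The shop data, coordinates and thresholds below are all invented for illustration, standing in for the real Google/Bing databases:

```python
import math

# Hypothetical local database standing in for Google/Bing results.
shops = [
    {"name": "MediSupply", "lat": 51.50, "lon": -0.12, "rating": 8.2, "items": {"wheelchair", "crutches"}},
    {"name": "City Pharmacy", "lat": 51.52, "lon": -0.10, "rating": 6.1, "items": {"wheelchair"}},
    {"name": "BookNook", "lat": 51.50, "lon": -0.13, "rating": 9.0, "items": {"books"}},
]

def distance_miles(lat1, lon1, lat2, lon2):
    # Rough equirectangular approximation, fine at city scale.
    dx = (lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2)) * 69.0
    dy = (lat2 - lat1) * 69.0
    return math.hypot(dx, dy)

def find_shops(me_lat, me_lon, item, near=0.7, good=7.0):
    results = []
    for shop in shops:                                # 'shop': the candidate set
        if distance_miles(me_lat, me_lon, shop["lat"], shop["lon"]) > near:
            continue                                  # 'near me': radius filter
        if item not in shop["items"]:
            continue                                  # 'wheelchairs': item filter
        if shop["rating"] < good:
            continue                                  # 'good': rating filter
        results.append(shop["name"])
    return results

print(find_shops(51.50, -0.12, "wheelchair"))  # ['MediSupply']
```

Each filter cheaply discards candidates, which is exactly the "billions of options narrowed down" picture painted above.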

This is a very simplistic version of what might be possible, of course. I would be cautious about using the word ‘comprehension’ here, because this example was a straightforward query; what would happen with a more abstract interaction, or without an already available database? There is no comprehension of the subject matter going on here, rather a complex automated response. However, other models do attempt this by modelling symbolic artificial intelligence systems and contextual learning systems.

4. Communication

This is easier to grasp. In the above example the virtual assistant has been able to come up with an answer; now it has to convey that information to me. A simple way is to use a generic sentence derived from the words in the original question plus the answer, something like “A good shop selling wheelchairs near ‘you’ is ‘X'” (all the words are the same, except changing me to you and adding in the shop’s name). Synonyms and other contextually learned sentences would improve this further and make it more realistic.
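That template idea can be sketched directly; the word list and shop name below are hypothetical:

```python
def answer(question_words, result):
    """Echo the query back as a generic template, swapping 'me' for 'you'."""
    swapped = ["you" if w == "me" else w for w in question_words]
    # e.g. ['good', 'shop', 'wheelchairs', 'near', 'me'] becomes a templated reply
    return "A {0} {1} selling {2} {3} {4} is '{5}'".format(*swapped, result)

print(answer(["good", "shop", "wheelchairs", "near", "me"], "MediSupply"))
# A good shop selling wheelchairs near you is 'MediSupply'
```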

The other aspect of communication is developing programmes which feel natural in how they speak; this area seems to have made great progress over the last couple of years.

5. Learning

This one is another tricky one to grasp at first: how can machines learn?

It’s easier to understand if you look at the recognition example above. By classifying similar-sounding words together, the software is in essence creating a database. If that database is programmed to change slightly whenever a new similar-sounding sample enters (meaning another sample is available to use as a reference), learning is in effect taking place.
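A minimal sketch of this kind of learning-by-accumulation, with made-up 3-number ‘feature vectors’ standing in for real audio patterns:

```python
def closest(database, sample):
    """Nearest-neighbour match: which stored word pattern is most similar?"""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda word: dist(database[word][-1], sample))

def recognise_and_learn(database, sample):
    word = closest(database, sample)
    database[word].append(sample)   # learning: the new sample joins the references
    return word

# Toy "sound patterns" as 3-number feature vectors (a stand-in for audio features).
db = {"yes": [[1.0, 0.2, 0.1]], "no": [[0.1, 0.9, 0.8]]}
print(recognise_and_learn(db, [0.9, 0.3, 0.2]))  # 'yes'
print(len(db["yes"]))                            # 2: the database has grown
```

Each recognised sample enlarges the database, so future matches are compared against ever more references, which is the learning loop described above.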

Another way learning can take place is by recording the sequences in which words appear in normal use. From this, an artificial intelligence can continue to learn the normal structural relationships between words, so it is less likely to make errors.

Experts generally use the term ‘deep’ learning when the system can correct itself and in turn make itself better. For example, in the sound recognition example above, the system was using its original database to compare new sounds against. If it makes the right guess, then the algorithm was right and reinforces itself. However, if it makes a mistake it can analyse the word again and tweak its own parameters, so that next time it won’t make the same mistake. This may be done by giving more importance to a certain signature it was previously giving less weight.

Another approach is to look at the context and learn from that. For example, let’s combine the above two examples of recognising words and learning about sentence structure. If the intelligence recognises a word which doesn’t fit the usual context (which it is learning from the sentence structure data), it can identify the most likely error. It can then try to match that wrongly identified audio pattern against the other candidates down its list of likely matches, sequentially, and see which one best fits the context. This gives it a better chance of matching the right word. If done successfully, it can incorporate this information into its original algorithm.
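The context trick above can be sketched with simple bigram counts; the toy corpus and the candidate words are invented for illustration:

```python
from collections import Counter

# Learn which word pairs occur in normal use (toy bigram counts).
corpus = "turn on the light turn off the light open the door".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def pick_by_context(previous_word, candidates):
    """Re-rank acoustically similar candidates by how often each one
    follows the previous word in the learned sequence data."""
    return max(candidates, key=lambda w: bigrams[(previous_word, w)])

# The recogniser heard something like 'light' or 'like' after 'the'.
print(pick_by_context("the", ["like", "light"]))  # 'light'
```

Because ‘the light’ appears in the learned sequences and ‘the like’ does not, the context re-ranking recovers the right word even when the audio match alone was ambiguous.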

Artificial Intelligence Basics – Ultimate :


Above we looked at where we are and at some aspects of where our synthetically intelligent systems are progressing. Now let’s look at what our aspirations (wrongfully or rightfully) for an ultimate artificial intelligence are, and what artificial intelligence basics would lead to the development of such an existence. Here I’m briefly going to consider 5 characteristics which I feel would be needed or would add something valuable:

1. Language

By this I mean the use of symbolic systems and metaphors, and the ability for these to evolve further, as happens in human language.

2. Imagination

This does not mean the ability to plan, as that can be achieved by learning as well. An ultimate artificial intelligence should have an inner mental world where it can combine abstract concepts.

3. Creativity

Following on from imagination, another ultimate artificial intelligence basic should be the ability to create unique ideas, whether by combining abstract concepts, by an unconscious process, or by inspiration.

4. Consciousness

There are many descriptions of what consciousness is, and it can get a little confusing. To keep it simple, let’s just think of consciousness as what all of us intuitively know it to be: the ability to know that we know, and that we exist.

5. Phenomenological experience

This ties into consciousness as well. However, what I want to stress is the ability to have feelings and emotions. I feel this is perhaps the most difficult thing to comprehend. How might an artificial intelligence be able to have feelings and emotions such as pain, pleasure, sadness, grief and happiness? These appear to be unique and indivisible qualities.


To conclude

I feel the artificial intelligence basics that we discussed for the Lite version will probably lead us to being able to cover the language part of the Ultimate version. There could be some debate about imagination, creativity and even consciousness. As far as feelings go, I don’t think we are making any progress in that domain.

The thing is, we don’t understand these phenomena very well; we are more used to experiencing them. One has to question whether such things are in fact open to scientific analysis at all.

The artificial intelligence basics stated in this article are my personal opinions and insights, formulated intuitively and through a little research. If you feel there are some pertinent artificial intelligence basics that I missed, or if you have any other comments, please use the comment section below.


First Let’s Look at the Odd Concept of Artificial Intelligence

Simply put, the concept of artificial intelligence is strange in that it is counter-intuitive. I believe we have in general started to formulate certain ideas of what it means to be intelligent at the level of software and machines which are erroneous in their essence. This is because we get carried away by the term ‘intelligence’, which we normally conceptualize in its human form. It’s the same type of mistake (albeit a different flavour of it) that we make when trying to think about animal intelligence.

For example, when we consider ‘intelligent’ animals such as dolphins, monkeys or elephants, we characterise them as intelligent based on several parameters, a few of which are:

  • Problem solving skills
  • Awareness of self
  • Ability to plan for the future (?imagine)
  • Level of communication skills (not language per se)

Based on an animal’s performance on such parameters we can label its level of intelligence. The mistake we make is when we start to formulate their intelligence in terms of human intelligence. The biggest problem is that animals do not have the ability to use language like humans do. This may seem a small matter at first, but when given proper thought it is probably the biggest difference there is. Without this ability they are likely unable to use symbolic and abstract systems of thought (which challenges any human-like ideas we create of their state of being, as we ourselves are unable to process our thoughts without relying on these systems to begin with).

This is especially problematic when scientists at times infer that a bird’s ability to plan for the future in certain experiments shows a level of imagination, because imagination as humans understand it is based on abstract and symbolic thought processes.

A discussion of 3 ideas regarding the concept of Artificial Intelligence:

1. Ability to solve problems and communicate with humans

Every computer, even a calculator, can solve simple mathematical problems which a human asks it to. Taking this further, it is possible to formulate other issues as mathematical ones and solve them easily enough as well. For example, I want Google to tell me the best restaurant near me. It can formulate this using 3 predefined, human-assigned parameters:

  1. What ‘near me’ means (i.e. within 1 mile)
  2. Your geo-positioning parameters (latitude and longitude)
  3. The user-generated numerical ratings of restaurants in the Google Maps database

Google can calculate an answer for me using these three numerical parameters, and can tweak it further in innumerable ways to give more meaningful information. Now, using variations of a generic sentence like “the best restaurant near you is ‘X’, which is ‘Y’ miles away”, Google can replace ‘X’ with the name and ‘Y’ with the number of miles, and you get a sensible answer. Furthermore, Google might add text-to-speech software (a sub-field of artificial intelligence) and make the phone talk to you.
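A sketch of those three parameters in code, with an invented restaurant list standing in for the Google Maps database:

```python
# Hypothetical stand-in for the Google Maps ratings database.
restaurants = [
    {"name": "Luigi's", "miles_away": 0.4, "rating": 4.6},
    {"name": "Spice Hut", "miles_away": 0.9, "rating": 4.8},
    {"name": "Quick Bite", "miles_away": 0.3, "rating": 3.2},
]

def best_restaurant_near_me(radius=1.0):
    nearby = [r for r in restaurants if r["miles_away"] <= radius]  # 'near me'
    best = max(nearby, key=lambda r: r["rating"])                   # 'best'
    return "The best restaurant near you is '{}' which is {} miles away".format(
        best["name"], best["miles_away"])

print(best_restaurant_near_me())
# The best restaurant near you is 'Spice Hut' which is 0.9 miles away
```

Nothing here goes beyond filtering and a maximum over numbers plus a sentence template, which is precisely the point of this first conception of artificial intelligence.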

If we conceptualise artificial intelligence in this sense, then it seems sensible: in essence it’s just mathematical problem-solving software at a more complex level, tweaked with human-friendly attributes.

2. Machines can ‘think’ on a human level spectrum

By this we mean that artificially intelligent machines are able to ‘think’, and that this thinking sits at a certain level: currently at a cat’s level or a baby’s level, eventually at a human level or even a super-human level. Conceptualising synthetic intelligence in this way seems mistaken for a number of reasons:

  • By the notion of ‘think’ we imply the human thought process. As far as we know, and as I discussed above, this is possible only for humans, not even for animals. We are unable to forge the concept of thinking without the human reference point of what thought is.
  • A spectrum implies a quantitative difference between the various levels (e.g. cat, baby, human, super-human). This does not seem to be the case: the difference between how cats are and how humans are appears to be qualitative. Much is still not understood about how babies develop adult-level thought, but one of the most widely recognised accounts is Jean Piaget’s theory of cognitive development, according to which cognitive development occurs not gradually but in sudden qualitative jumps.
  • Super-human ability seems possible in many respects, such as speed of calculation, but it would again be wrong to place distinct super-human feats on a spectrum as if it were a continuum with humanness.

Concept of Artificial Intelligence

3. Artificial Intelligence can feel like a human does

This one goes even a step further than the previous idea. As far as we understand, humans can solve problems and so can machines, so there is some similarity in that respect. It is a big stretch to think this automatically means machines can think, but at least there is something we can relate to.

As far as emotions and feelings go, there is not even a remote link we can equate with machines or software. We can have virtual characters that show emotions in video games, or even robots that seem to have emotional capabilities. This has in fact been programmed into them using predefined mathematical rules: they are designed to give us an image or animation which we are then able to relate to an emotion we already recognise. The emotion itself exists only in our own (human) repository; the character is like a picture drawn on a piece of paper and has no emotions of its own. Emotions and feelings such as pain, happiness, worry and desire are phenomenological experiences, or qualia, which I feel cannot be understood by breaking them down; they are only uniquely understood from each person’s own subjective experience.
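The "predefined mathematical rules" behind a character's apparent emotions can be made concrete with a toy sketch. Everything here is illustrative and hypothetical, not any real robot's API: the "emotion" is just a lookup that picks a stock animation for a human to interpret.

```python
# Rules mapping sensor-style inputs to a canned animation file.
# The names and conditions are invented for illustration only.
EMOTION_RULES = [
    # (condition on the input state, animation to display)
    (lambda s: s["battery"] < 0.2, "frown.gif"),
    (lambda s: s["user_smiled"], "smile.gif"),
]

def pick_animation(state, default="neutral.gif"):
    """Return the first animation whose rule matches the current state.

    Nothing is felt anywhere in this process; the 'emotion' happens
    in the human viewer who recognises the displayed image.
    """
    for rule, animation in EMOTION_RULES:
        if rule(state):
            return animation
    return default
```

A call like `pick_animation({"battery": 0.1, "user_smiled": False})` selects the frowning animation by simple rule-matching, which is the sense in which the character's "sadness" is only a picture for us to interpret.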



The issue with the concept of artificial intelligence is that, at the deepest level, most machine intelligences work on mathematical functions and logic circuits. We can increase the quantity of such functions to immense levels (by far surpassing human abilities to perform such tasks), but it does not change what they are, i.e. an ever larger collection of logic gates. Logic and maths are only one aspect of human nature, and being able to emulate human-like characteristics such as speech or emotions does not imply humanness. In fact it is more like a means of communicating with humans, because the interpretation of such speech or emotions occurs within the human, much as with an emotion-provoking picture (the picture here being the means, not itself able to feel emotion).
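The "ever larger collection of logic gates" point can be made tangible with a minimal sketch: every digital operation, from addition up, can be composed from a single gate type (NAND is a standard choice), so scaling up only multiplies gates, it does not change their kind. This is a standalone illustration, not a claim about any particular machine's architecture.

```python
def nand(a, b):
    """The primitive: everything below is built from this one gate."""
    return not (a and b)

# Familiar gates, each defined purely in terms of NAND.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits, returning (sum, carry).

    Chaining circuits like this yields arithmetic on numbers of any size,
    yet the result remains a bigger pile of the same gates.
    """
    return xor(a, b), and_(a, b)
```

However many of these circuits we stack together, the whole remains composed of the same primitive operation, which is the sense in which more gates do not add up to a different kind of thing.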

Now, I don’t feel this is the whole story with respect to the concept of artificial intelligence, as there are some interesting aspects, such as language ability and consciousness, which deserve more focus. I will attempt to discuss these in this artificial intelligence series and then in the consciousness series later on.