We watched the film 'Stealth' last night – a tale about an advanced unmanned fighter plane in the not so distant future. It's not a bad film – a lightweight Hollywood action pic really, but unlike some films of the same genre, it made us both think – which is quite unusual for Hollywood stuff. This here plane had an AI system and thus learnt as it went along – the trouble was, it learnt both bad and good things from its human 'companions', which it copied rather too well. I won't ruin the story, should you wish to see the pic, but it made me ask myself: 'How do you, or how does a machine, teach itself the difference between good and bad?' and 'How can you explain the concept of morals to a machine?'

You could go much deeper than this and wonder how on earth a machine would teach itself these concepts, or indeed, whether it would ever arrive at such ideas on its own after observing and learning from all that goes on around it. You could 'bring up' a trainee robot in the right family environment, or something like that, so it could learn from its human counterparts – but would that mean a robot brought up in the sort of environment that helped create a serial killer might become a robot serial killer? Who knows.
Of course, as we have seen from the movies, giving robots full autonomy is not generally a great idea, because they often start acting like humans and getting crazy ideas about becoming presidents, or world leaders and such like. Well, that's how it happens in books and films, but would it really happen in real life? Good question. Could robots become more humane than humans? Frightening question. Would robots start wondering about how the universe came about? Interesting question.
Funny what you think about on Monday mornings.
Break over. Back to work.
Bye.