Book Review – Why Greatness Cannot Be Planned: The Myth of the Objective

Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth O. Stanley and Joel Lehman

My rating: 5 of 5 stars

This is one of the most thought-provoking books I have read. I could relate to it even better because, like the authors, I work in the area of AI, and many of the examples and the final case study come from this field. It is also inspiring to see such deep insight emerge from the tool they built, called Picbreeder.

The core idea presented in the book is that “objectives” are unnecessary, and rather a hindrance, while solving “complex” problems. Objectives might be useful for simple problems where it is easy to chart a path to the goal. However, for more complex problems where the route to the goal is unknown, objectives cause more harm than good.

Next, an alternative model called novelty search is presented. It finds solutions based on how “interesting” a solution is rather than how close it is to an “objective”. The authors connect this idea to stepping stones, which is a great analogy. They argue that a complex goal lies somewhere in a hazy lake with very low visibility. It is difficult or impossible to reach a specific goal in this lake, and an objective in this case is like a “broken compass”. It is far more fruitful to explore the lake by finding new stepping stones based on their novelty or interestingness and on how many further stepping stones they can lead to. True success lies in exploring the space of the lake rather than trying to reach a specific imaginary point.

Finally, a variety of use cases is discussed, from education and innovation to evolution and AI, where a novelty-based approach makes more sense than an objective-based one.

This book presents an intriguing idea which can literally change the way we operate in life. Sometimes it gets a bit repetitive, but overall it drives the point home. I highly recommend this book!


Book Review – Vehicles: Experiments in Synthetic Psychology

Vehicles: Experiments in Synthetic Psychology by Valentino Braitenberg

My rating: 4 of 5 stars

I have never read a book quite like this one! It’s amazing and will make you think and wonder at every step.

The book consists of the following two parts:

In the first part, the author shows how to build autonomous bots using plausible electrical/mechanical designs. These bots resemble small robotic vehicles. The initial designs are simple and give the reader the basic idea, and every new vehicle builds on a previous one, adding more features and complexity. The most curious feature of these vehicles is that although they are purely mechanical, their resulting behaviour resembles human emotions like love, fear, aggression etc. At one point the author even argues that these vehicles can ‘get’ new ideas or follow a train of thought, though this is not always convincing.

Many parts of the book shed light on present-day artificial intelligence and machine learning concepts such as Turing machines, prediction, classification, ideas similar to gradient descent, correlation vs. causation, etc.

One of the main lessons of this book is “the law of uphill analysis and downhill invention”, which is highlighted multiple times. It means that it is comparatively easy to design a system with some desired characteristics (downhill invention); on the other hand, given a system as a black box, it is very difficult to work out its exact design details (uphill analysis). This becomes evident in the numerous vehicle designs in the book.

In the second part of the book, the author links back to research in biology that aligns with the concepts discussed in the first part. This part might be tricky to understand, especially for readers without a biology background. I myself only skimmed through it.

Overall, it’s a very intriguing read! However, it is a bit difficult to understand and might require multiple passes through some sections to get it.

This book also makes me wonder: if these simple designs can lead to such complex behaviour in these vehicles, then our brain and its associated systems are vastly more complex, and it is hard to even imagine the kind of complex behaviour they can lead to in humans (which they actually do)! Also, following uphill analysis, given a specific human behaviour, it is hard to work out how exactly it arises in the underlying machinery of our body.


Thought Experiment: Deep Learning on Life

Quick Refresher on Neural Networks
A feed-forward neural network (referred to as NN from here on) has three types of layers: an input layer, hidden layers and an output layer. There are two main stages in using a NN: the training stage and the testing stage. In the training stage, an input is fed via the input layer, passes through multiple hidden layers (that’s why it’s called deep) and finally through the output layer. At the output layer, we get the NN’s computed output for the given input, and we also have the corresponding expected output. The difference between the expected (true) output and the computed output is called the ‘loss’; it is backpropagated through the NN, and the weights of the layers are updated so as to minimize the loss in future. This process is repeated many times to train the NN. Then, at the test stage, a new input is fed to the NN and we read off the corresponding output.

Applying Deep Learning and Neural Networks on Life

[Figure: the NN of life – inputs from the five senses feeding a deep network whose outputs are life metrics]

I thought it might be an interesting thought experiment to apply a basic deep learning neural network model to our life! (Probably the effect of too much Deep Learning 😀 ) The overall idea is shown in the figure above. All the inputs from the five sense organs (things we see, hear, taste, smell and touch) are transformed and fed as input to the NN. The deep hidden layers process this information further. At the output end, the result is transformed into multiple output metrics like happiness, satisfaction, peace of mind, monetary benefit and many more.

Our life’s NN is mostly in training mode. Each and every experience is transformed, fed into the input layer, and passed through the hidden and output layers. For every experience we also have a corresponding expectation — the expected output vector — which assigns a different score to each of the output metrics. The difference between expectation and output, the loss, is backpropagated to train the NN. Thus our model is trained by the experiences of our life so far; every experience is a training sample.

At test time, we give a test experience as input to this trained model and take some major or minor decision based on the output, which in turn affects the experiences (future inputs) we will face. Thus the decisions we take in our life, at any point, are based on the experiences we have had so far and our expectations of each of those experiences.
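The expectation-vs-output loss above can be sketched in a few lines. The metric names, scores and weights below are purely illustrative inventions of mine; the point is only that the loss compares a whole vector of outcomes against a whole vector of expectations, so one sky-high metric cannot compensate for another that has cratered:

```python
import numpy as np

# Hypothetical output metrics for life's NN (my own illustrative picks):
metrics = ["happiness", "satisfaction", "peace_of_mind", "money"]

def life_loss(output, expectation, weights):
    """Weighted squared difference between what an experience gave us
    (output) and what we expected from it (expectation)."""
    output, expectation, weights = map(np.asarray, (output, expectation, weights))
    return float(np.sum(weights * (output - expectation) ** 2))

# A balanced outcome: every metric close to expectation -> small loss.
balanced = life_loss(output=[0.7, 0.7, 0.7, 0.7],
                     expectation=[0.8, 0.8, 0.8, 0.8],
                     weights=[1, 1, 1, 1])

# A lopsided outcome: money exceeds expectation, but peace of mind is
# far below it -> the overall loss is still large.
lopsided = life_loss(output=[0.7, 0.7, 0.1, 1.0],
                     expectation=[0.8, 0.8, 0.8, 0.8],
                     weights=[1, 1, 1, 1])
```

The `weights` vector is where individual personality enters: two people with identical experiences but different metric weights will compute very different losses — and hence make very different decisions.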

The usual Gyaan (life lessons) derived from NNs:

* Sometimes we think some people are crazy when they take certain decisions in life. However, it is wrong to judge others, as we can never put ourselves in their shoes: firstly because we don’t know everything they have experienced in their lives, and secondly because we don’t know what exactly their expectations from their own lives are. So it makes sense to stop judging others!

* Sometimes our own decisions may seem crazy. That might be because the weights of our output metrics (expectations) are significantly different from the usual weights of other people, and in that case it makes sense to embrace our crazy self!

* Training our life’s NN is, in itself, a very hard task! And if we then start taking other people’s expectations into account while training our NN, it becomes very difficult to attain a stable, convergent model. So it makes sense to live our life and make our decisions on our own terms and stop caring about the char log (random strangers)!

* If we have very high expectations in life, then the loss is mostly higher than usual and the model updates frequently. These frequent updates require more energy. So it makes sense to be prepared, and to practice staying strong, in order to keep training the NN efficiently when one has very high expectations from life. Conversely, having no expectations doesn’t make sense either, because then the model has no guidance at all. Thus it helps to know and set our expectations in advance.

* It is difficult to train a NN with little data. As we gather new experiences, we train our NN better. So even in the case of our life’s NN, acche din (good days) are coming as long as we keep accumulating new experiences!

* Sometimes, if you feel that life is screwed up (you are stuck in a local minimum or a saddle point) and the NN of life is not doing any good, it is ALWAYS possible to make some changes in the parameters so as to resume effective training. It will be hard, but not impossible. The loss function will always have its ups and downs. Stay strong, have hope and don’t give up!

* At any point in time, every person is at a different stage of training their NN. Every person has also faced different experiences and has different expectations; structurally and parameter-wise, too, the NNs will be very different. So it does not make any sense to compete with and compare each other! It makes sense instead to share the strategies that help us train our NNs better. Life is actually NOT a race! Cooperation is better than competition!

* If you find mentors who genuinely care for you, preserve and nurture those relationships; they can guide and motivate you in difficult times. The same applies to real friends and family; they can be your pillars of support. It makes sense to learn to network and build lasting, meaningful relationships. (Networking can be hard for introverts like me, but then it is all about quality, not quantity!)

* Our education is like the initialization and the first few epochs that set up the NN of life. Initial conditions are very important. Also, the closer our education is to real life, the better prepared we will be in the future! Stop memorization and rote learning.

* Just as high-end GPUs can significantly improve training speed compared to normal CPUs, we can make life’s NN train much better if we maintain good health. It makes sense to eat healthy, exercise and stay fit.

* Sometimes, if we compare two consecutive iterations, we might not notice any significant change, which in turn might make us doubt ourselves and our abilities: is anything truly happening? Are we on the right track? And so on. It is very rare to get great short-term results. It makes sense to be patient and focus on long-term results.

* Even if you meet or exceed expectations on one output metric while the other metric scores are significantly low (for example, your experiences have a high monetary value but a low peace-of-mind score, or vice versa), the overall loss is still going to be very high, i.e. things are going to be hard. This shows that lopsided expectations and outcomes are bad, and it makes sense to work towards holistic, all-round development.

P.S.

* Of course, real life is a lot more complex and no model can exactly mimic it. This NN model is just an attempt to capture some of the significant features of life.

* I could have titled this post as Feed Forward Neural Network on Life but then Deep Learning on Life sounds way cooler! 😀