Intellectual Property From 2008 to 2019

We need general AI to solve issues and problems that are not too complex for humans to solve, but simply too vast. Computers can crunch data far better than any human. With artificial general intelligence available right now, if we only networked our narrow AIs together (see below), we could achieve more in the next ten years than we have achieved in the whole history of mankind.

Picture Recognition

I started thinking about general picture recognition in the early 1990s, and had always wondered whether it was possible to create software that went beyond what, up until the late 2000s, was the usual approach: recognizing one particular thing, mostly using mathematical points. For example, using many different points on a face to try to identify that face. But while all of this was starting to take off, I was already thinking about how every single object could be identified within a picture, along with the context and relationship of each object to the picture as a whole; put another way, how each object is positioned within any one scene.

About 2008 I put my first attempt at general picture recognition onto the Internet, with software I called General Picture Recognition Software General (GPRSG). Since that time everybody, including companies like Google, has come to believe that what would once have been considered impossible, a computer recognizing any object within any picture, may not be as impossible as first thought.

We have developed some interesting concepts in "General Picture Recognition", which are listed below. We have noticed the proliferation of American software patents, in particular picture recognition patents, which are not the same in the UK. We have therefore listed our intellectual property rights below.

1. Auto Tagging uses our unique concept, something similar to neural networking concepts but very different. It compares two very similar whole pictures and allows each picture to be manually or automatically tagged by the user. Once we have manually tagged a number of pictures, we allow our picture recognition software, GPRSG Software, to take known information from a picture that is already tagged and use it to identify and tag similar pictures. The process is iterative: the more images are tagged, the more images can be tagged from the selection of already tagged images. This concept belongs to us and should not be copied or used in any other software product without our consent.
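The original GPRSG implementation is not public, so the following is only a minimal sketch of the iterative tag-propagation idea described above, using simple numeric feature vectors (for example colour histograms) and an assumed similarity threshold. All names and values here are illustrative.

```python
import math

def similarity(a, b):
    """Cosine similarity between two feature vectors (e.g. colour histograms)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def propagate_tags(images, threshold=0.95):
    """Iteratively copy tags from tagged images to similar untagged ones.

    `images` maps name -> {"features": [...], "tags": set_or_None}.
    Repeats until no new image can be tagged (the iterative step above:
    the more images tagged, the more can be tagged)."""
    changed = True
    while changed:
        changed = False
        tagged = {k: v for k, v in images.items() if v["tags"]}
        for name, img in images.items():
            if img["tags"]:
                continue
            for ref in tagged.values():
                if similarity(img["features"], ref["features"]) >= threshold:
                    img["tags"] = set(ref["tags"])  # inherit the known tags
                    changed = True
                    break
    return images

images = {
    "a.jpg": {"features": [10, 0, 2], "tags": {"helicopter"}},  # manually tagged
    "b.jpg": {"features": [9, 0, 2], "tags": None},             # very similar to a
    "c.jpg": {"features": [9, 1, 2], "tags": None},             # similar to a and b
    "d.jpg": {"features": [0, 10, 0], "tags": None},            # dissimilar
}
propagate_tags(images)
```

The dissimilar image is left untagged for manual attention, which matches the mixture of manual and automatic tagging described above.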

2. Colour interpretation uses our unique concept to remove all background within a photo and allows the selection of only the main object through colour selection, taking any colour range and collapsing the whole range into a single colour. This concept allows us to pull an object out of any picture: using colour alone, without needing to know every rotational shape, we can identify an object of (within reason) any size within any picture. We call this our "User Defined Custom Colour Object to Search Analysis", because by selecting colours within an object, the user can find the same object in any other picture, even when its size and shape change because of rotation.
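The actual GPRSG colour method is proprietary, but the core step described above, collapsing a selected colour range into a single colour and discarding everything else as background, can be sketched like this. The colour ranges and representative colours are hypothetical stand-ins for the user's selection.

```python
def collapse_colour(pixel, ranges):
    """Map a pixel to a single representative colour if it falls inside
    any user-selected colour range; otherwise treat it as background (None).

    `ranges` is a list of (lo_rgb, hi_rgb, representative_rgb) tuples,
    an illustrative stand-in for the user's colour selection."""
    r, g, b = pixel
    for lo, hi, rep in ranges:
        if lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1] and lo[2] <= b <= hi[2]:
            return rep
    return None  # background: removed

def extract_object(image, ranges):
    """Keep only pixels whose colour falls in the selected ranges,
    collapsing each matched range to one colour."""
    return [[collapse_colour(px, ranges) for px in row] for row in image]

# A tiny 2x3 "photo": reds of various shades on a blue background.
image = [
    [(200, 10, 10), (180, 40, 30), (10, 10, 200)],
    [(220, 20, 5), (0, 0, 255), (190, 35, 25)],
]
red_range = [((150, 0, 0), (255, 60, 60), (255, 0, 0))]
result = extract_object(image, red_range)
```

Because the whole shade range maps to one colour, the same object is matched whatever its size or rotation, which is the point of the technique.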
 
Example



3. Object storage uses our unique concept of storing objects in their own files, tagged so that any picture can use any object in any file to build up a description of that picture. The picture can then be described in its own vocabulary or language, using a simple word or phrase search tagged to the file, with in-depth descriptions and links. A file may, for example, hold an object called "helicopter"; that object file is tagged internally with the description, linking mechanism and file content that allow any object in any other image to be compared with the object held in the file. We use the same colours to identify the same object in other pictures, even when the colours change because of the way light hits the object in different positions, whether it is rotated vertically, horizontally or both.

4. A method that outlines a change of strong colour, or the place where one colour changes to another. We have a unique method for producing the outline.
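The unique outlining method itself is not described here, so this sketch uses only the plain idea stated above: mark a pixel as part of the outline when the colour difference to a neighbour is strong. The threshold and distance measure are assumptions.

```python
def outline(image, threshold=100):
    """Mark a pixel as part of an outline when the colour difference
    to its right or lower neighbour exceeds a threshold (a plain
    colour-distance check, not the author's proprietary method)."""
    h, w = len(image), len(image[0])

    def dist(a, b):
        # Sum of absolute per-channel differences between two RGB pixels.
        return sum(abs(x - y) for x, y in zip(a, b))

    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if x + 1 < w and dist(image[y][x], image[y][x + 1]) > threshold:
                edges[y][x] = True
            if y + 1 < h and dist(image[y][x], image[y + 1][x]) > threshold:
                edges[y][x] = True
    return edges

# A single red pixel on a black background: its boundary lights up.
img = [
    [(0, 0, 0), (0, 0, 0), (0, 0, 0)],
    [(0, 0, 0), (255, 0, 0), (0, 0, 0)],
    [(0, 0, 0), (0, 0, 0), (0, 0, 0)],
]
edges = outline(img)
```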

5. Multiple methods to identify what one picture contains. The main concept is to identify automatically the difference between an object and what is actually background. We identify the background by working out what is an object: an object is always something that sits in front of a background, and each background is a layer behind the object. Using this method we can work out what is actually the object and what is the background using software alone.

New Ideas on Neural Networks

The traditional approach to neural networks is to create nodes that represent neurons. Each neuron can be changed mathematically until the concept works. My new theory differs in only the following way: what if the computer CREATES A NETWORK OF NEURONS USING DATA, where, importantly, NO NEURONS EXIST UNTIL THEY ARE CREATED FROM DATA? This is something I am currently working on. I started this work in 2008 and came back to the idea in 2017, but I have been playing with these ideas since the early 1990s. If it is possible to create a computer-programmed brain using data, then maybe we will be moving further towards general AI or general picture recognition. What is important in this new concept is not to have nodes or neurons, created by the programmer, that are acted on by working out the lowest error rate to achieve an action, as in the traditional method. What if the data takes over this operation, and only the data is changed when an error in the data is found? Once found, the data is updated, and we then have a computer that acts like a human brain. This would also eliminate one big issue with current thinking on AI: current neural networks need a lot of training, and when they are wrong, node or neuron weightings need to be changed, or more neurons added to the system. This is certainly not like the human brain, where new connections between neurons, and new neurons, seem to be created throughout life. In my theory of general AI, the program that acts on the data is equivalent to the brain structure, and it operates differently depending on that structure, for example for speech, hearing or seeing. In my new theory, the current thinking of nodes or neurons is replaced with data that, importantly, represents the original data. The best AI systems will be the ones designed with a programmed structure that best implements the data as neurons.
The most important aspect of this type of data-driven neural network system is that the design is general. The big advantage of this is that the brain structure can be programmed and improved over time, so it continues to get better; but the brain's patterns are not created through code, they are pure data. The way the data is stored, analysed and retrieved can use traditional programming languages, but because the data makes up the neurons, we have true artificial intelligence. Decisions made by the system will depend on how the data-driven neural network nodes are organized, and this organization will be critical to the decisions the AI brain makes. What I have not yet made clear is that this data can change over time, to put right wrong decisions as the brain learns and new neurons or neuron connections are made. Being able to change a data-driven neural network will be critical for such systems to work well. Therefore, as the AI brain's capacity increases, the AI system should learn more and be able to rectify mistakes in data made when it did not have as much knowledge. This, I would think, mirrors how a human learns. As a simple example, if I learn to spell the word as "speach", then although it is spelt wrong, I will always spell it that way until I realize my mistake. Once I use a spellchecker and realize it is wrong, my brain swaps out "speach" and creates "speech". To do this it usually moves "speach" to a lower priority and supersedes it with "speech" at a higher priority, which is therefore "correct at the moment". This is the key to learning: being "currently right", which may change over time as new data is learnt. This is important because the human brain, like a good AI system, must be able to change opinion over time. This is the area of my current research that I am most interested in.
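The "speach"/"speech" priority swap above can be sketched as a tiny data store in which corrections demote old entries rather than delete them, so the system's belief is always only "currently right". The class and method names are illustrative, not part of any existing system.

```python
class DataBrain:
    """Minimal sketch of the 'currently right' idea: each concept holds
    candidate data entries ranked by priority, and a correction demotes
    the old entry and promotes the new one instead of deleting anything."""

    def __init__(self):
        self.memory = {}  # concept -> list of (priority, value), highest first

    def learn(self, concept, value, priority=1):
        entries = self.memory.setdefault(concept, [])
        entries.append((priority, value))
        entries.sort(reverse=True)  # keep highest priority first

    def recall(self, concept):
        """Return the highest-priority value: what is believed 'at the moment'."""
        entries = self.memory.get(concept)
        return entries[0][1] if entries else None

    def correct(self, concept, new_value):
        """A correction: the new value takes a higher priority than anything
        learnt so far; the old data stays, just demoted."""
        entries = self.memory.get(concept, [])
        top = entries[0][0] if entries else 0
        self.learn(concept, new_value, priority=top + 1)

brain = DataBrain()
brain.learn("spelling", "speach")      # learnt wrong at first
before = brain.recall("spelling")      # believes "speach"
brain.correct("spelling", "speech")    # spellchecker correction arrives
after = brain.recall("spelling")       # now believes "speech"
```

Nothing is ever erased, so the system can keep rectifying mistakes as more knowledge arrives, as described above.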
I believe that such data-driven neural network systems may take computers far beyond current human intelligence, in that they will be very much better than humans in very many areas. These new AI systems may even be able to produce or create theories like the ones Albert Einstein thought about, but derived using data-driven neural networks. What I think will be interesting is whether humans will be able to understand why such theories are being defined, and how those computer-derived theories can be implemented. Will humans be intelligent enough to interpret such computer-defined theories? I will leave that as an open question to be discussed by others.

All of these concepts are our intellectual property rights and these ideas should not be reproduced in any other software without our permission.

General Artificial Intelligence

Maybe we already have it. Maybe it's more about linking up that intelligence, and maybe that is the big problem; maybe it's competition between companies that is the issue. All artificial intelligence really is, is computer code and data. Many problems have already been solved: for example, I understand that object recognition is mostly solved, or is as good as human-level object recognition. Therefore, if you had a way of passing a problem around to the AI that deals with that kind of problem, then job done. WE NEED A SET OF SIMPLE, GENERAL STANDARDS THAT EACH AI CAN RECEIVE AND SEND, PASSING PART OF THE PROBLEM ON TO ANOTHER AI, EACH DEALING WITH WHAT IT IS GOOD AT AND PASSING THE OTHER PROBLEMS ON. The mechanism for passing on the problem is really the only thing stopping general AI right now. We have robots that can walk; we have algorithms that can see objects. What if we just passed data between algorithms? It does not matter how complex the processing is, as long as the input and output data, or the algorithms to be passed, follow a universal standard that is very simple, so that anybody creating an AI can easily follow those standards and their AI can become part of the general artificial intelligence. It also solves the so-called control problem, because no one person or group can control all parts of the AI. This idea is the intellectual property of myself and GPRSG.

One main concept or aspect of creating these STANDARDS is something I have defined as GAIAG, or General Artificial Intelligence Agreement General. What does this mean? Today we have many people working on many different A.I problems, and many working on the same A.I problem. These algorithms are getting great results, but it has been pointed out that not all results are perfect. Take natural language: what if you had several natural language A.I algorithms that could take results from each other and form a consensus? For example, if five agreed and two disagreed, it would go with the five natural language A.I's that agreed. If all disagreed with each other, it would do exactly what we would do and ask a human for the answer; that answer would then be fed back to all the A.I's so that they could learn and improve. This feedback loop is important because it allows the A.I's that had the wrong answer, in natural language learning for example, to be improved. My concept of general artificial intelligence agreement means that one A.I can pass its results on to another A.I using the receive-and-send general standards. Say you have several A.I's designed by different people or companies; they all sign up to the GENERAL A.I STANDARDS. Once signed up and on the network, they form a comprehensive insight into a particular area, for example natural language, if you had 20 A.I's designed for natural language all looking at the same problem. Remember, this is general A.I: if the problem were object recognition, the General Artificial Intelligence would just pass that object (photo, video or sound) to the set of A.I's that deal with that issue. Results are returned, after General Artificial Intelligence Agreement within a particular area, back up to the main General Artificial Intelligence Brain General, or GAIBG for short. This combined information is fed back to the USER, who may be anybody on the Internet.
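The agreement step described above, go with the specialists that agree, otherwise ask a human and feed the answer back, can be sketched as follows. The function name and the `ask_human` callback are hypothetical stand-ins for the proposed standards.

```python
def gaiag_consensus(answers, ask_human):
    """Sketch of the GAIAG agreement step: return the most-voted answer,
    provided at least two A.I's agree on it; if everything disagreed,
    fall back to a human and feed the answer back to every A.I."""
    counts = {}
    for a in answers:
        counts[a] = counts.get(a, 0) + 1
    best, votes = max(counts.items(), key=lambda kv: kv[1])
    if votes > 1:                       # at least two A.I's agree
        return best, []
    answer = ask_human()                # everybody disagreed: ask a human
    feedback = [answer] * len(answers)  # the answer is fed back to all A.I's
    return answer, feedback

# Five natural-language A.I's agree, two disagree: the five win.
result, fb = gaiag_consensus(["noun"] * 5 + ["verb", "adj"],
                             ask_human=lambda: "noun")

# All disagree: the human decides and every A.I receives the answer.
r2, fb2 = gaiag_consensus(["a", "b", "c"], ask_human=lambda: "b")
```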
Future expansion means that the STANDARDS FOR SENDING AND RECEIVING INFORMATION MUST BE SEPARATE FROM ANY ONE A.I's INPUT OR OUTPUT. What I mean is that it is the job of the person or company who creates an A.I to translate its input and output to conform with the General Artificial Intelligence Agreement General and its standards. With this paradigm, redundancy is not an issue: new A.I's in the same area, and importantly new A.I algorithms created for the first time in completely new areas, would act like a new skill that any human might learn. Some A.I systems may fall over or stop being developed; as with any node system, if one node falls off the network, the system will just jump to the next A.I. However, for such an A.I system to function, large A.I systems like Google's speech recognition would have to be available, so for it to work well you would need the combined co-operation of big companies like Microsoft, Google and Facebook, to name just some. The general A.I system I am discussing here would not deal only with data-driven issues. With powerful General Artificial Intelligence Agreements General, these standards could mean that a robot made from any material and hardware would just have to SEND and RECEIVE information to and from the General Artificial Intelligence Brain General, which would feed back everything that robot would need to move, walk and talk in the real world. For example, if the robot is looking at something, that data stream would be sent to the General Artificial Intelligence Brain General, or GAIBG for short.
It would be sent to the set of A.I's that deal with video and interpreting objects in real time; at the same time, the sound would be sent to the A.I's that deal with understanding sound and interpreting speech or noises. These results would be sent back, or even sent on to a set of A.I's that deal with combining sound and picture data, before returning to the robot via the General Artificial Intelligence Brain General. The robot can then, importantly, (think): what I am seeing, for example, is a room with seats, tables and chairs, but what I am hearing is the sound of the sea through an open window. This information can be bounced back and forth through the General Artificial Intelligence Brain General, perhaps to a set of A.I (thinking) algorithms. So if the robot is asked "What are you thinking?", the answer would be: "I am thinking about the room and listening to the sea through the open window." Maybe the next question from a human is "How can I use the space in this room better?" This leads to the final aspect of the GAIAG and its standards: algorithms that are not considered to be A.I can still form part of the A.I system, as long as they comply with the agreed A.I standards. For example, an algorithm that uses space-aware mathematics to produce the best layout for a room may be a standard algorithm, but its data is fed back up to the A.I brain, so the person who asked the original question about using the room space better would get a reply from the robot like: "Let me print you an alternative room plan that better uses the space within this room." You may reply OK or no; if OK, the robot may connect to the printer and print the improved plan. Ultimately, since we are still dealing with robots, we could say something like: "Robot, can you run down to the local shop and pick up a pint of milk for me?" The robot may ask which shop, having used the standards to connect to the General Artificial Intelligence Brain General. The human may reply Tesco or Asda, etc.
The robot would get the latitude and longitude from the General Artificial Intelligence Brain General and then proceed to the shop to pick up the milk. One important aspect of the General Artificial Intelligence Agreements General standards is that not only data but also code or algorithms can be requested. For example, a robot may not be able to connect to the Internet at all times, so core A.I algorithms can run inside the robot without any Internet connectivity, while a connection to the General Artificial Intelligence Brain General can be made whenever it is required. In summary, the system I am proposing does not have to be an all-singing, all-dancing system all at once; it should be flexible and allow for any future technology. The standards must allow for any new technology and any current or new types of programming language or hardware, like quantum computers. Therefore the standard must be kept very separate from any software or hardware; it will be down to the software or hardware to comply and produce input and output that meets these standards. The input and output data to and from each A.I would therefore have to be independent of any data type: it would just be a data stream. One very useful A.I would therefore be a stream identifier, which would learn about the data streams so that the General Artificial Intelligence Brain General could send each stream to the correct set of A.I algorithms that deal with that data. This idea is the intellectual property of myself and GPRSG. If anybody like Google, Facebook or Tesla would like to contact me about developing these standards into a working model, I would be interested in helping any large team with my idea for the development of general artificial intelligence standards. One final thing that is very important, in my opinion, as a computer programmer myself:
Input and output should be a single line of code that can be placed in any programming language. The longer I program, the more I realize that keeping it simple and easy for all is the key.


For example, it should be just one line of code:

Receive stream = anything(send stream, any parameters required by standards)

The sender and receiver stream formats can be defined in a set of parameters that the General Artificial Intelligence Brain General can understand, and those parameters can also be set so that a certain format is returned within the receiver stream, so it can be used by the local algorithm.
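In practice, the single line above could look like the following sketch. Everything here is illustrative: the function name `gaibg_request`, the parameter names and the in-memory registry are hypothetical stand-ins for the proposed standards, and the registered A.I's would really be remote services.

```python
import json

# Hypothetical registry of specialist A.I's, keyed by the kind of stream
# each one handles (the "stream identifier" role described above).
REGISTRY = {
    "text/pos": lambda stream: json.dumps({"tags": stream.split()}),
}

def gaibg_request(send_stream, stream_type, return_format="json"):
    """The proposed one-line call: send a stream, get a stream back.
    The brain routes the stream to the A.I registered for its type,
    and the parameters control the format of the returned stream."""
    handler = REGISTRY.get(stream_type)
    if handler is None:
        raise LookupError(f"no A.I registered for {stream_type}")
    result = handler(send_stream)
    return result if return_format == "json" else str(result)

# The caller's side really is one line:
receive_stream = gaibg_request("the cat sat", stream_type="text/pos")
```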

Finally, because these standards are completely separate from hardware and software requirements, systems like IBM's, DeepMind's and all other A.I systems could work together for the good of all, given the standards and idea proposed. With these standards we may have true General Artificial Intelligence General, or GAIG, within a couple of years.

This idea is the intellectual property of myself and GPRSG.

Machine Learning a Simple but very Real Idea

I would like to make it clear that current computer materials will never be self-aware. If you made the systems biological, then, since we are biological, in theory anything is possible. But silicon will never be self-aware on its own; a combination of biological and silicon is unknown.

But systems that imitate human behaviour are very possible.

I am going to be very controversial in what I am about to say, but it may just be the truth: what if machine learning, or general intelligence, cannot be mathematically proved? We have always associated mathematicians with thinking. Albert Einstein, for example, was good at maths, but I would say he was above all an amazing thinker. What if thinking, not maths, answers the question? What if we can simplify machine learning? What if HL = (N - Y) is the only equation you need to work out general intelligence? HL stands for human-level intelligence, N stands for No and Y stands for Yes. Intelligence is one single decision at a time, either Yes or No. The minus sign is important because every No, whether it exists yet or not, gives rise to a possible Yes, and every Yes gives rise to a possible No. What if our brain is simply a parallel decision maker that produces one thought at a time? Put another way: I may think that a bishop in chess can move only in straight lines; Yes, that is my truth. But then, having watched many people play chess, I observe that the bishop moves diagonally. Therefore No, a bishop does not move in straight lines, but Yes, it does move diagonally: I have learnt. The idea of big data is therefore "true", but we have made a fundamental mistake. Humans get big data through the whole of their lives: we see, we hear sounds, we smell, we touch and feel objects to get texture. Our learning is continual, so when we see something we have usually seen it a thousand times before, and when we hear something we have usually heard that noise many times, so it is familiar. "That was a dog barking", Yes, we think; but if it is a noise we have not heard before, we think No, it is not a dog, or a wolf, or a cat, and so we eliminate all known possibilities. We may never know what that noise is, because (N - Y) has eliminated all possible ideas. Now two things happen: we wait for the noise again, and we use new information to make sense of that noise.
Say I walk past a paintball centre near my new house. The next time I hear the noise, I associate it with a paintball gun being fired. If I hear it again, I say Yes, it is a paintball gun; or someone tells me what the noise is, for example my new neighbour explains the sound and tells me it is a paintball gun being fired.
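The Yes/No elimination and later association described above can be sketched very simply: each known sound is checked and eliminated with a No, and a new piece of information (the neighbour's explanation) turns the unknown into a Yes. The sound "signatures" here are invented placeholders.

```python
def classify(sound, known):
    """Return (label, "Yes") if the sound matches something known;
    otherwise eliminate every known possibility and return ("unknown", "No")."""
    for label, signature in known.items():
        if sound == signature:
            return label, "Yes"
    return "unknown", "No"   # (N - Y): all known ideas eliminated

known = {"dog bark": "woof", "cat meow": "meow"}

first = classify("crack", known)     # never heard before -> No
known["paintball gun"] = "crack"     # the neighbour explains the sound
second = classify("crack", known)    # now a Yes: we have learnt
```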

Each human can only learn so much: I may understand computer programming, but I do not know how to build a television. Therefore general intelligence, and super intelligence, is already achievable. The human brain is only ever asking the same simple question: is it Yes or No? Shall I get out of bed? Shall I brush my teeth? Shall I wash my hands? Shall I comb my hair? Yes, No.

On general intelligence and super intelligence: I have written a program that does POS (part-of-speech) tagging and works well, but when I connected it to other programs that do POS, the combined system was already far better than I am at identifying parts of speech. I simply asked the different programs whether they agreed and, if so, updated the POS data set. Each system on its own got some POS tags wrong, but when you combine them and allow them to learn from each other, you have super intelligence in the POS domain. Super intelligence is just a matter of linking all of these systems together (see the discussion of networking these systems above), from systems that analyse the voice, to systems that hear, to systems that talk, to systems that read, and many more.
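The agree-and-update step described above can be sketched with a few hypothetical taggers voting on each word; only an agreed tag is written back to the shared data set. The taggers here are trivial stand-ins, not the author's actual programs.

```python
def agree_and_learn(word, taggers, dataset):
    """Ask each tagger for its tag; if a strict majority agree, record
    the tag in the shared POS data set so every system can learn from it."""
    votes = {}
    for tag in (t(word) for t in taggers):
        votes[tag] = votes.get(tag, 0) + 1
    tag, count = max(votes.items(), key=lambda kv: kv[1])
    if count > len(taggers) // 2:   # strict majority agreed
        dataset[word] = tag
        return tag
    return None                     # no agreement: leave for a human

# Three toy taggers: two tag "run" as a verb, one calls it a noun.
taggers = [lambda w: "VERB", lambda w: "VERB", lambda w: "NOUN"]
dataset = {}
result = agree_and_learn("run", taggers, dataset)
```

Each tagger alone would sometimes be wrong, but the agreed data set only ever records majority answers, which is why the combined system outperforms any single one.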

The next challenge is not just creating more sophisticated machine learning programs (systems), but HOW WE NETWORK THESE SYSTEMS TOGETHER SO THAT ALL CAN TALK TO EACH OTHER AND PASS DATA BETWEEN EACH OTHER. Then we will have super intelligence. That super intelligence will not be something that destroys the world; if it is a distributed network, it will, I hope, be something humans can use to improve the world.

Machine learning systems have more power than we yet really understand. If you combine and network these systems, they will be able to answer questions and achieve technologies we have not yet fully realised. If we network these systems to talk to each other, they will grow faster and become more powerful than we could ever have thought possible. This is not fiction but reality; we underestimate this technology at mankind's peril.

Join what we now call narrow AI to other narrow AIs, in a network of different types of narrow AI, and you will see the beginning of the future I see. The next breakthrough, in my opinion, will be in joining different AI systems together; once this is achieved, you may start to get super intelligence faster than you think.

Joining narrow AI systems (programs) together will lead to computer systems understanding the real world, and unless they understand the real world, they will always be less than they could be. To model the real world, we will need technologies that provide any part of the AI system, that collection of narrow AIs, with much more data than we currently provide: technologies and chips that can read different smells, associate heat with objects and, importantly, feed that data back into the AI system, and much more.

The brain stores data; computers store data. Therefore, think of each narrow AI as acting like a group of neurons, with each narrow AI getting data from, and passing data on to, other neurons anywhere else in the world via the Internet. The input is a data stream with a Yes (agree) or No (disagree): if all AIs agree, or a majority agree, then Yes, it is confirmed to be a car, or confirmed to be a dog barking, or confirmed to be the smell of fire. Once we have this level of modelling of the world, we will soon have super AI, because it will be better than humans in almost every domain in which humans have ability.

What makes this very interesting are narrow AI neuron programs that accept input from many other narrow AI neurons. Let us call these general AI neurons: they will have the ability to analyse many types of data, and many of these general AI neurons together may produce new concepts and new ideas, where it would be impossible for any single human to hold enough data to formulate those new theories.

These theories can be given to a lot of very clever humans, who may imagine new technologies or cures. This is the future of AI I now see and envisage. Who can stop this future? Surprisingly, businesses that refuse to pass data between AI systems, or to allow their AI system to talk to other AI systems.

To all AI developers, the most important point I can make is that data from vision, sound, taste, smell and touch is stored differently within the brain. It is that stored data the brain works with, not the external data. Put another way: you see the mirror on the wall; you close your eyes and can still imagine the mirror on the wall. That is the brain working from stored data, because you no longer see the real mirror while your eyes are closed. It is the stored data we need to pass between narrow and general AI neurons.

The General Artificial Intelligence Brain General is the most important node on the system. It is the node that provides human-like thought, making a decision once all nodes have fed their results back to it, just like the human brain with its neurons. All pieces of information, sound, vision, taste, smell and so on, for a single spatial time slice would be allocated a unique code; the main General Artificial Intelligence Brain General node would hold a copy of this unique number, and could therefore make sense of all the sub-nodes passing that spatial time slice of information back to it after processing. Once this is achieved, we have general artificial intelligence working at the same level as the human brain, and it is then only a case of the sub-nodes improving to bring us to super artificial intelligence, by which I mean a machine that can do any human-type process (intellectually) better than humans.
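The spatial-time-slice idea above can be sketched like this: every sensory input captured at one moment gets one unique code, sub-nodes process their own modality, and the brain node reassembles the results by that code. All names, and the trivial "processing", are hypothetical.

```python
import itertools

_slice_ids = itertools.count(1)  # source of unique codes, one per time slice

def new_slice(sensors):
    """Allocate one unique code to everything sensed at this moment and
    split the slice into per-modality pieces for the sub-nodes."""
    code = next(_slice_ids)
    return code, [(code, modality, data) for modality, data in sensors.items()]

def reassemble(results):
    """Brain node: group processed results back together by slice code,
    so a single coherent 'thought' can be formed per moment."""
    combined = {}
    for code, modality, processed in results:
        combined.setdefault(code, {})[modality] = processed
    return combined

# One moment in time: the robot sees a room and hears the sea.
code, pieces = new_slice({"vision": "room with chairs", "sound": "sea"})
# Sub-nodes each process their own modality (trivially upper-cased here).
processed = [(c, m, d.upper()) for c, m, d in pieces]
thought = reassemble(processed)
```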

There is nothing wrong with using neural networks, in combination with other types of network, to achieve an imitation of the human brain, as long as they are connected to each other in my proposed network. Time will go by, and better methods may be found to improve the General Artificial Intelligence Brain General sub-nodes. For example, back propagation within neural networks may no longer be required to train a set of neurons; the system may see its own inefficiencies and self-improve.

Can we create such a General Artificial Intelligence Brain General today? The answer is YES, if all interested parties work together. There is nothing wrong with companies using thousands of GPUs and CPUs, but I have developed a self-learning, data-driven, neural-type network, connected to other neural networks, that works well on my very ordinary home laptop.

It can process up to 5,000 lines of text in about 2 minutes at its fastest speed, though this is reduced when it is in learning mode. In learning mode it is actively attempting to learn new POS (part-of-speech) tags, but if it is only using already-learnt speech, it can run and analyse text much faster than any human could possibly achieve.

As for the barriers I have encountered: there are very few others willing to share the POS data from their machine learning systems without charging a fee, and the small number of organizations that do usually have limited POS data. I would not be surprised if I now have some of the better POS data sets through my AI training. But organizations that have huge POS data sets are simply not going to share them, because they are worth too much financially to share openly. In fact, they use that data to feed their own AI systems, so sharing it would not make sense for them.

There is a difference between companies offering tools that access data via some sort of neural network, and actually giving everybody that data to help create new neural network systems. The latter will just not happen, so the best solution is a neural network made up of many different types of narrow neural network, which can incorporate any other self-learning software, even software not based on a neural network. The main idea, as above, is based on sub-networks in the same area agreeing (discussed above).

Back to my main point: you do not need a big computer or the cloud to build your own SELF-LEARNING PROGRAM; I have shown that you can build your own. However, these programs are not easy to build, and it takes a huge amount of time to build a good narrow AI program. But once built, they achieve what no normal computer program can achieve. Having built one myself, I can really see why AI programs are far more powerful than normal programs. I cannot get over how they self-learn; it sometimes feels as though these programs are living systems, in that they keep on improving the more data they get.

I am interested in legal data: the idea that if you need AI to analyse legal documents, from Acts to case law to journals to discovery-type documents, these can be bunched up and pulled out so as to surface the same relevant legal point from many documents, points that even a well-trained solicitor or lawyer might not consider. If we could do this in, say, 10 minutes, it could save someone spending days or even weeks attempting to find the same data. Looked at another way, what if a 10-minute search found important information that a lawyer had missed after doing their legal research or discovery? On average, computers hold large amounts of unorganised data. Interestingly, I have found legal information I did not know about, and I would think my legal knowledge is quite wide. With case law, it means you can home in on cases you would never have considered, or which are not precedents themselves but quickly give an understanding of where the precedent for a case evolved from, or an idea that improves on, or may change, the precedent for a certain set of facts.

Why not do a simple traditional search? The main issue is context, which cannot be improved in a traditional search. With AI we are attempting to pull in data with similar context, and those results must be returned first and bunched together. The more learning the AI does, the better the context and the better the results.

What I have been working on is the idea that, although we can input large amounts of information, the learnt data set representing brain memory is currently only 71,950 KB for my biggest set of learnt data. When the human brain sees a car, it does not go through every car you have ever seen in your entire life to say that it is a car; that would take too long, even if your brain is doing a type of parallel processing. This is one of my big thoughts on reducing AI data down from huge data sets to much smaller ones: we only need to find the closest match, and put those close matches together, to have much smaller data sets. I think AI will have to move in this direction; much smaller data sets mean much quicker times and results.
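The close-match consolidation above can be sketched like this: instead of keeping every example ever seen, each new example is either merged into the nearest stored match or kept as a new entry, so the data set stays small. The distance measure and threshold are illustrative assumptions.

```python
def consolidate(memory, example, threshold=2.0):
    """Keep memory small: merge the example into its closest stored match
    (averaging the two) if it is near enough, else store it as new."""
    def dist(a, b):
        # Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    if memory:
        nearest = min(memory, key=lambda m: dist(m, example))
        if dist(nearest, example) <= threshold:
            merged = tuple((x + y) / 2 for x, y in zip(nearest, example))
            memory.remove(nearest)
            memory.append(merged)   # one entry now stands for both examples
            return memory
    memory.append(tuple(example))   # genuinely new: keep it
    return memory

memory = []
for car in [(1.0, 1.0), (1.2, 0.8), (9.0, 9.0)]:  # two similar cars, one outlier
    consolidate(memory, car)
```

After the loop, the two similar examples have collapsed into one stored match, while the outlier is kept separately; lookups only ever scan this reduced memory, which is the speed gain argued for above.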

One downside of any AI system is that it will need to learn. Put another way, day 1 of using an AI will give no better results than a normal search, because it will know nothing. After day one it should always be better than a normal search.

All AI systems are really search systems. From self-driving cars to NLP (natural language processing), they all do the same thing; some do their search using hidden mechanisms, for example self-driving cars search against patterns, while NLP may search to translate language or speech. It is not surprising that human vision, smell and taste are a searched representation of firing patterns of neurons that, at the highest level, come down to a YES or NO option. We may colour it using different language, but it is just a single thought that makes a decision at any single point in time and space.

This idea is the intellectual property of myself and GPRSG; if you use any of these ideas in any talks, please acknowledge your source as (GPRSG).

This next topic is over the top but may well be true: what if parallel thinking uses some type of quantum relationship in the brain? If time, space and parallel thinking spark thought (the all-done-at-once theory), maybe this is why it must be biological to have true thought.

Computers may come close to thought but will always be an imitation, because they lack true parallel thinking. Just as nothing so far has broken the speed-of-light test, what if the parallel-thinking-to-produce-a-thought test only applies to biological entities?


THE I KNOW TEST

The I Know test is an important connection concept: I know and pass it on, and I do not know and pass it on. Let me give an example of this idea. I am sitting in the dark but I can touch type. I press the wrong key. I observe that I have pressed the wrong key, therefore I know I have pressed the wrong key, so one part of the brain sends a message stating that I have pressed the wrong key and that the key I need to press is next to the wrong key I actually pressed. The final message is sent that adjusts my finger to move one place over, so I press the correct key. Without the I Know test we cannot pass the message on to the correct place.
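The touch-typing example above can be sketched as two stages passing a message: the first stage decides what it knows (wrong key pressed, correct key is a known offset away) and hands that on; the second stage acts only on the message it receives. The keyboard row, function names and message fields here are my own illustrative assumptions.

```python
# One keyboard row is enough for the example.
ROW = "qwertyuiop"

def observe(intended: str, pressed: str) -> dict:
    """First stage: do I know a mistake happened, and where?"""
    if pressed == intended:
        return {"know": True, "error": False}
    offset = ROW.index(intended) - ROW.index(pressed)
    # "I know": I pressed the wrong key AND I know the correct key
    # is `offset` places over -- so pass exactly that on.
    return {"know": True, "error": True, "move": offset}

def correct(pressed: str, message: dict) -> str:
    """Second stage: act only on what the first stage says it knows."""
    if message["error"]:
        return ROW[ROW.index(pressed) + message["move"]]
    return pressed

msg = observe(intended="t", pressed="r")  # meant 't', hit the key beside it
fixed = correct("r", msg)
# fixed == "t": the finger moves one place over
```

The point of the test is in the hand-off: without the first stage knowing (and saying) what it knows, the second stage has no message to act on.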

THE DOUBLE TAKE TEST

This is what I think the brain is doing in humans, but if not, it is the best way for a computer to do it to get near-human results. I think all information is grouped in the brain. I am not saying it is grouped by closeness or in any other definable way, although it may be to some extent. What I am sure about is that the brain, if it is as random as my brain, can do this: train equals track and many other things; track equals train (the double take test) and many other things. What is nice about this is that it is really easy to do in computer code. It is one of the easiest ways of allowing a computer to think, given a small amount of data initially. Trains can be associated with passengers, so are equal to passengers. Therefore trains run on tracks and have passengers. We are building a way of identifying words with areas of thinking.
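As the text says, the double take is easy to do in code: store every association in both directions, so from either word you can get back to the other and then walk on to further associations. A minimal sketch, with the train/track/passengers example from above (the data structure is my own choice, not a GPRSG design):

```python
from collections import defaultdict

# Every association is stored both ways -- the "double take".
associations = defaultdict(set)

def associate(a: str, b: str) -> None:
    """Train equals track AND track equals train, in one call."""
    associations[a].add(b)
    associations[b].add(a)

associate("train", "track")
associate("train", "passengers")

# Double take: from "track" we get back to "train"...
# ...and walking one step further chains the associations:
# trains run on tracks and have passengers.
related = {x for word in associations["track"] for x in associations[word]}
# related == {"track", "passengers"}
```

Starting from a small amount of data, each new `associate` call grows the web in both directions at once, which is why this works without large data sets.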

What is amazing about these methods is that all of this can be done on a standard laptop. This is because we are not working with large data sets, just lots of very small data sets that grow and learn just like we do.

Modeling the World

Modeling the world is just stored data, so we do not have to do the whole prediction on the fly. Modeling the world is not the whole answer, but the search technique used with modeling the world moves us closer to answering certain questions of general intelligence.
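"Stored data instead of prediction on the fly" can be sketched as a model that computes an answer once, stores it, and serves every later ask by lookup. Here `slow_predict` is a stand-in I invented for any costly on-the-fly computation; the real point is the store-then-search pattern.

```python
# The stored answers ARE the model of the world.
world_model: dict[str, str] = {}

def slow_predict(question: str) -> str:
    # Placeholder for an expensive prediction (search, simulation, ...).
    return question.upper()

def answer(question: str) -> str:
    """Search the stored model first; only predict when we must."""
    if question not in world_model:
        world_model[question] = slow_predict(question)  # learn once
    return world_model[question]  # every later ask is a cheap lookup

answer("is the track next to the platform")
# the second identical call is answered from stored data, not recomputed
answer("is the track next to the platform")
```

The design choice is the same one made throughout this piece: trade a little storage for search speed, so the expensive thinking happens once and the system answers from memory afterwards.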

All this is good, but just connecting all the current narrow AI solutions together will, on its own, be very powerful.

This idea is the intellectual property of myself and GPRSG; if you use any of these ideas in any talks, please acknowledge your source as (GPRSG).
General Picture Recognition Software