Intellectual Property From 2008 to 2017

I started thinking about General Picture Recognition in the early 1990s, and had long considered whether it was possible to create software that went beyond what was typical up until the late 2000s: recognition of one particular thing, mostly using mathematical points, for example using many different points on a face to try to identify that face.  While all this was starting to take off, I was already thinking about how every single object could be identified within a picture, along with the context and relationship of each object to that picture, or, to put it another way, how it was positioned within any one scene.

About 2008 I put my first attempt at General Picture Recognition onto the Internet, with software I called General Picture Recognition Software General.  Since that time everybody, including companies like Google, has started to believe that what would once have been considered impossible, a computer recognizing any object within any picture, may not be as impossible as first thought.

We have developed some interesting concepts in "General Picture Recognition", and they are listed below.  We have noticed the proliferation of American software patents, and in particular picture recognition patents, which are not handled the same way in the UK.  We have therefore listed our intellectual property rights below.

1. Auto tagging uses our unique concept, similar to Neural Networking concepts but very different.  It compares two very similar whole pictures and allows each picture to be tagged manually or automatically by the user.  Once a number of pictures have been tagged manually, we allow our picture recognition software, called GPRSG Software, to take known information from a picture that is already tagged and use it to identify similar pictures and tag those images.  This process is iterative: the more images are tagged, the more images can be tagged from the selection of already tagged images.  This concept belongs to us and should not be copied or used in any other software product without our consent.
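The iterative tagging loop described above can be sketched roughly as follows. This is a hypothetical illustration, not the GPRSG code: pictures are reduced to simple feature tuples and `similarity` is a crude stand-in for real image comparison, with `auto_tag` and the 0.75 threshold chosen purely for the example.

```python
# Hypothetical sketch of the iterative auto-tagging loop: tags spread from
# a manually tagged seed set to similar pictures, and each newly tagged
# picture becomes a seed for the next pass.

def similarity(a, b):
    """Crude similarity: fraction of matching feature values."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def auto_tag(pictures, tags, threshold=0.8):
    """pictures: {name: features}; tags: {name: tag} for the seed set.
    Repeatedly copy a tag to any untagged picture that is similar enough
    to a tagged one, until no more pictures can be tagged."""
    tags = dict(tags)
    changed = True
    while changed:
        changed = False
        for name, feats in pictures.items():
            if name in tags:
                continue
            for known, tag in list(tags.items()):
                if similarity(feats, pictures[known]) >= threshold:
                    tags[name] = tag   # propagate the tag
                    changed = True     # this picture now seeds later passes
                    break
    return tags

pictures = {
    "a": (1, 2, 3, 4),
    "b": (1, 2, 3, 5),   # close to "a"
    "c": (1, 2, 9, 5),   # close to "b" but not to "a"
    "d": (7, 8, 9, 0),   # unrelated, stays untagged
}
result = auto_tag(pictures, {"a": "helicopter"}, threshold=0.75)
print(result)
```

Note how "c" is only reachable through "b": this is the iterative part, where each round of tagging enables the next.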

2. Colour interpretation uses our unique concept to remove all background within a photo and allows the selection of only the main object through colour selection, taking any colour range and incorporating the whole colour range into a single colour.  This concept allows us to pull an object out of any picture: without needing to know every rotational shape, using colour alone, we can identify an object of (within reason) any size within any picture.  We call this our "User Defined Custom Colour Object to Search Analysis", because by selecting colours within an object, the user can find the same object in any other picture, even if there is a change of size and shape because of any sort of rotation.
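A minimal sketch of the colour-range idea, under simplifying assumptions: pixels are plain RGB tuples, the user-selected range is an axis-aligned box in colour space, and everything outside the range is treated as background. The names `in_range` and `extract_object` are illustrative only.

```python
# Illustrative sketch: collapse a user-selected colour range into one
# colour so the object can be lifted out of the background.

def in_range(pixel, low, high):
    """True if every channel of the pixel lies within the selected range."""
    return all(l <= p <= h for p, l, h in zip(pixel, low, high))

def extract_object(pixels, low, high, fill=(255, 0, 0)):
    """Keep only pixels inside the user-defined colour range, collapsing
    the whole range into the single colour `fill`; everything else
    (the background) becomes None."""
    return [fill if in_range(p, low, high) else None for p in pixels]

row = [(10, 10, 10), (200, 30, 40), (210, 35, 50), (15, 12, 9)]
mask = extract_object(row, low=(180, 20, 30), high=(255, 60, 70))
print(mask)  # background pixels removed, object pixels unified
```

Because the whole range maps to one colour, small shifts in shade caused by rotation or lighting still land on the same object colour, which is the point of the concept.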

3. Object storage uses our unique concept that allows objects to be stored in their own files and tagged, so that any picture can use any object within any file to build up a description of that picture.  The picture can then be described in its own vocabulary or language using a simple word or phrase search against the tagged file, with in-depth descriptions and links.  For example, a file may hold an object called "helicopter"; that object file is tagged internally with the description, linking mechanism and file content that allow any object in any other image to be compared with the object in the file.  We use the same colours to identify the same object in other pictures, even if the colour changes because of the way light hits the object when it is in different positions due to rotation, vertical, horizontal or both.
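The per-object storage idea can be sketched as a simple record per object, carrying tags, a description, reference colours and links, searchable by a word or phrase. The structure and field names below are hypothetical stand-ins for whatever the real object files contain.

```python
# Minimal sketch of the object-file store: one record per object, with
# tags, description, reference colours and links to related object files.

object_store = {
    "helicopter": {
        "tags": ["helicopter", "aircraft", "rotor"],
        "description": "Rotary-wing aircraft, viewed side-on.",
        "colours": [(60, 60, 70), (200, 200, 210)],  # reference colours
        "links": ["aircraft", "vehicle"],            # related object files
    },
}

def search(store, phrase):
    """Return object names whose tags or description mention the phrase."""
    phrase = phrase.lower()
    return [name for name, rec in store.items()
            if phrase in rec["description"].lower()
            or any(phrase in tag for tag in rec["tags"])]

print(search(object_store, "rotor"))
```

A picture's description could then be built up from the records of the objects found in it, which is the linking mechanism the text describes.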

4. Outlining uses a method that outlines a strong colour change, or the point where one colour changes to another.  We have a unique method for achieving the outline we produce.
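The author's own outlining method is described only as unique, so the sketch below is a generic alternative, not the method itself: it simply flags positions where the colour difference between neighbouring pixels exceeds a threshold. `colour_distance` and the threshold of 100 are assumptions for illustration.

```python
# Generic colour-change outlining (not the author's undisclosed method):
# mark positions in a pixel row where a strong colour change begins.

def colour_distance(a, b):
    """Sum of per-channel differences between two RGB tuples."""
    return sum(abs(x - y) for x, y in zip(a, b))

def outline(row, threshold=100):
    """Return indices where the colour jumps sharply from the previous pixel."""
    return [i for i in range(1, len(row))
            if colour_distance(row[i - 1], row[i]) > threshold]

row = [(10, 10, 10), (12, 11, 10), (200, 30, 40), (205, 28, 42)]
print(outline(row))  # the dark-to-red boundary
```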

5. Multiple methods identify what one picture contains.  The main concept is to tell the difference automatically between an object and what is actually background.  We identify the background by working out what is an object: objects always sit in front of the background, and each background region forms a layer behind the object.  Using this method we can work out what is actually the object and what is the background using software alone.

New Ideas on Neural Networks

The traditional approach to Neural Networks is to create nodes that represent neurons.  Each neuron can be changed mathematically until the concept works.  My new theory differs in one way: what if the computer "CREATES A NETWORK OF NEURONS USING DATA; IMPORTANTLY, NO NEURONS EXIST UNTIL THEY ARE CREATED USING DATA"?  This is something I am currently working on.  I started this work in 2008 and came back to the idea in 2017.  If it is possible to create a computer-programmed brain using data, then maybe we will be moving further towards general AI, or general picture recognition.

What is important in this new concept is not to have nodes or neurons, created by the programmer, that are acted on by working out the lowest error rate to achieve an action, as in the traditional method.  What if the data takes over this operation, and only the data is changed when an error in the data is found?  Once found, the data is updated, and we then have a computer that acts more like a human brain.  This would also eliminate one big issue with current thinking on AI: current Neural Networks need a lot of training, and when they are wrong, node or neuron weightings need to be changed, or more neurons need to be added to the system.  That is certainly not like the human brain, where new connections to neurons, and new neurons, seem to be created throughout life.

In my theory of general AI, the program that acts on the data is equivalent to the brain structure, and it operates differently depending on the structure, for example speech, hearing or seeing.  The current thinking on nodes or neurons is replaced with data that, importantly, represents the original data.  The best AI systems will be those designed with a programmed structure that best implements the data as neurons.  The most important aspect of this type of data-driven neural network system is that the design is general.
The big advantage of this is that the brain structure can be programmed and improved over time, so it continues to get better.  But the brain's patterns are not created through code; they are pure data.  The way the data is stored, analyzed and retrieved can use traditional programming languages, but because the data itself makes up the neurons, we would have true Artificial Intelligence.  Decisions made by the system will depend on how the data-driven neural network nodes are organized, and this organization will be critical to the decisions the AI brain makes.

What I have not yet made clear is that this data can change over time to put right wrong decisions, as the brain learns and new neurons or neuron connections are made.  Being able to change a data-driven neural network will be critical for such systems to work well.  As the AI brain's capacity increases, the AI system should learn more and be able to rectify mistakes in data made when it did not have as much knowledge.  This, I would think, mirrors how a human learns.

As a simple example, if I learn to spell "speech" as "speach", then although it is spelt wrong, I will always spell it that way until I realize my mistake.  Once a spellchecker shows me the error, my brain swaps out "speach" and creates "speech".  To do this it usually moves "speach" to a lower priority and supersedes it with "speech" at a higher priority, which is therefore "correct at the moment".  This is the key to learning: being "currently right", which may change over time as new data is learnt.  This is important because the human brain, like a good AI system, must be able to change its opinion over time.  This is the area of my current research.  I believe that such data-driven neural network systems may take computers far beyond current human intelligence, in that they will be very much better than humans in very many areas.
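The "currently right" mechanism in the spelling example can be sketched as competing data entries with priorities, where a correction supersedes the old belief rather than deleting it. The `Belief` class and its methods are a toy illustration, not a claim about how such a system would actually be built.

```python
# Toy sketch of priority-based belief revision: the wrong entry stays
# stored at lower priority, and the correction supersedes it.

class Belief:
    def __init__(self):
        self.entries = {}   # value -> priority (higher wins)

    def learn(self, value, priority=1):
        """Store a value, keeping the highest priority seen for it."""
        self.entries[value] = max(self.entries.get(value, 0), priority)

    def correct(self, wrong, right):
        """Supersede: give the correction a higher priority than the old
        entry, which remains stored at its lower priority."""
        self.learn(right, self.entries.get(wrong, 0) + 1)

    def current(self):
        """What the system currently believes is right."""
        return max(self.entries, key=self.entries.get)

spelling = Belief()
spelling.learn("speach")               # learned wrongly at first
spelling.correct("speach", "speech")   # spellchecker triggers correction
print(spelling.current())
```

Because "speach" is demoted rather than erased, the system keeps a record of what it used to believe, which matches the idea of rectifying earlier mistakes as knowledge grows.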
These new AI systems will even be able to produce or create theories like those Albert Einstein thought about, but using data-driven neural networks.  What I think will be interesting is whether humans will be able to understand why such theories are being defined and how those computer-driven theories can be implemented.  Will humans be intelligent enough to interpret such computer-defined theories?  I will leave that as an open question to be discussed by others.

All of these concepts are our intellectual property, and these ideas should not be reproduced in any other software without our permission.

General Artificial Intelligence

Maybe we already have it; maybe it's more about linking up that intelligence, and maybe that is the big problem.  Maybe company competition is the issue.  All that Artificial Intelligence is, is computer code and data.  Many problems have already been solved: for example, I understand that object recognition is mostly solved, or is as good as human-level object recognition.  Therefore, if you had a way of passing the problem around to the AI that deals with that problem, then job done.  WE NEED A SET OF SIMPLE GENERAL STANDARDS THAT EACH AI CAN RECEIVE AND SEND, PASSING PART OF THE PROBLEM ON TO ANOTHER AI, WHICH DEALS WITH WHAT IT IS GOOD AT AND PASSES THE OTHER PROBLEMS ON.  The mechanism for passing on the problem is really the only thing that is stopping general AI right now.  We have robots that can walk and algorithms that can see objects; what if we just passed data between the algorithms?  It does not matter how complex the processing is, as long as the input and output data, or algorithms to be passed, follow a universal standard that is very simple, so that anybody creating an AI can easily follow those standards and their AI can become part of the general artificial intelligence.  It also solves the so-called control problem, because no one person or group can control all parts of the AI.  This idea is the intellectual property of myself and GPRSG.

One main concept or aspect of creating these STANDARDS is something I have defined as GAIAG, or General Artificial Intelligence Agreement General.  What does this mean?  Today we have many people working on many different A.I. problems, and many working on the same A.I. problem.  These algorithms are getting great results, but it has been pointed out that not all results are perfect.  Take natural language: what if you had several natural language A.I. algorithms that could take results from each other and form a consensus?  For example, if five agreed and two disagreed, it would go with the five natural language A.I.'s that agreed.  If all disagreed with each other, it would do exactly what we would do and ask a human for the answer; that answer would then be fed back to all the A.I.'s so that they could learn and improve.  This feedback loop is important because it could allow A.I.'s that had the wrong answer, for example in natural language learning, to be improved.  My concept of General Artificial Intelligence Agreement means that one A.I. can pass its results on to another A.I. using the receive and send general standards.  Say you have several A.I.'s designed by different people or companies; they all sign up to the GENERAL A.I. STANDARDS.  Once signed up and on the network, they would form a comprehensive insight into a particular area: for example, you would get better natural language results if you had 20 A.I.'s designed for natural language looking at the same problem.  Remember, this is general A.I.: if the problem was object recognition, the General Artificial Intelligence would just pass that object (photo, video or sound) to the set of A.I.'s that deal with that issue.  Results are returned, after General Artificial Intelligence Agreement within a particular area, back up to the main General Artificial Intelligence Brain General, or GAIBG for short.  This combined information is fed back to the USER, who may be anybody on the Internet.
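The consensus step described above can be sketched as a majority vote with a human fallback. This is a minimal illustration of the GAIAG idea, assuming the answers arrive as a simple list and that `ask_human` stands in for however a human would actually be consulted; the feedback to the individual A.I.'s is only noted in a comment.

```python
# Sketch of the GAIAG consensus step: go with the majority of A.I. answers,
# and fall back to a human when no majority exists.

from collections import Counter

def consensus(answers, ask_human):
    """answers: list of results from independent A.I.'s.
    Returns the majority answer, or the human's answer when there is no
    majority.  In a full system the human's answer would be fed back to
    every A.I. so they could learn and improve."""
    counts = Counter(answers)
    top, votes = counts.most_common(1)[0]
    if votes > len(answers) / 2:
        return top
    return ask_human()

# Five A.I.'s agree, two disagree: the five win, as in the text.
answers = ["cat", "cat", "cat", "cat", "cat", "dog", "bird"]
print(consensus(answers, ask_human=lambda: "unknown"))
```

When every A.I. disagrees, `ask_human` is called, which is exactly the fallback the text describes.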
Future expansion means that the STANDARDS FOR SENDING AND RECEIVING INFORMATION MUST BE SEPARATE FROM ANY ONE A.I. INPUT OR OUTPUT.  What I mean is that it is the job of the person or company who creates an A.I. to translate its input and output to conform with the General Artificial Intelligence Agreement General and its standards.  With this paradigm, redundancy is not an issue: new A.I.'s in the same area, or, importantly, new A.I. algorithms created for the first time in completely new areas, would act like a new skill that any human would learn.  Some A.I. systems may fall over or stop being developed; as with any node system, if one node falls off the system, it will just jump to the next A.I.  However, for such an A.I. system to function, large A.I. systems like Google's speech recognition would have to be available.  Therefore, for such an A.I. system to work well, you would need the combined co-operation of big companies like Microsoft, Google and Facebook, to name just some.  The general A.I. system I am discussing here would not just deal with data-driven issues.  With powerful General Artificial Intelligence Agreements General, these standards could mean that a robot made from any material and hardware would just have to SEND and RECEIVE information to and from the General Artificial Intelligence Brain General, which would feed back everything that robot would need to move, walk and talk in the real world.  For example, if the robot is looking at something, that data stream would be sent to the General Artificial Intelligence Brain General, or GAIBG for short.
From there it would be sent to the set of A.I.'s that deal with video and interpreting objects in real time.  At the same time, the sound would be sent to the A.I.'s that deal with understanding sound and interpreting speech or other sounds.  These results would be sent back, or even passed on to a set of A.I.'s that deal with combining sound and picture data, and the combined result returned to the robot via the General Artificial Intelligence Brain General.  The robot can then, importantly, (think): what I am seeing is a room with seats, tables and chairs, but what I am hearing is the sound of the sea through an open window.  This information can be bounced back and forth between the General Artificial Intelligence Brain General and perhaps a set of A.I. (thinking) algorithms.  So if the robot is asked "What are you thinking?", the answer would be: "I am thinking about the room and listening to the sea through the open window."  Maybe the next question from a human is "How can I use the space in this room better?"  This leads to the final aspect of the GAIAG and its standards: algorithms that are not considered to be A.I. can still form part of the A.I. system, as long as they comply with the agreed A.I. standards.  For example, an algorithm that uses space-aware mathematics to produce the best layout for a room may be a standard algorithm, but its data is fed back up to the A.I. brain, so the person who asked the original question about using the room space better would get a reply from the robot like: "Let me print you an alternative room plan that better uses the space within this room."  You may reply OK or No; if OK, the robot may connect to the printer and print the improved plan.  Ultimately, while we are still dealing with robots, we could say something like: "Robot, can you run down to the local shop and pick up a pint of milk for me?"  The robot may ask which shop, having used the standards to connect to the General Artificial Intelligence Brain General.  The human may reply Tesco or Asda, etc.
The robot would get the latitude and longitude from the General Artificial Intelligence Brain General and then proceed to the shop to pick up the milk.  One important aspect of the General Artificial Intelligence Agreements General standards is that not only data but also code or algorithms can be requested.  For example, a robot may not be able to be connected to the Internet at all times, so core A.I. algorithms can run inside the robot without any Internet connectivity, while connection to the General Artificial Intelligence Brain General can be made at any time when and if required.  In summary, the system I am proposing does not have to be an all-singing, all-dancing system all at once; it should be flexible and allow for any future technology.  The standards must allow for any new technology and any current or new types of programming language or hardware, like quantum computers.  Therefore the standard must be very separate from any software or hardware; it will be down to the software or hardware to comply and produce input and output that meets these standards.  The input and output data to and from each A.I. would therefore have to be independent of any data type; it would just be a data stream.  One very useful A.I. would therefore be a stream identifier, which would learn about the data streams so that the General Artificial Intelligence Brain General could send each stream to the correct set of A.I. algorithms for that data.  This idea is the intellectual property of myself and GPRSG.  If anybody like Google, Facebook or Tesla would like to contact me with the idea of developing these standards into a working model, I would be interested in helping any large team with my idea for the development of General Artificial Intelligence standards.  One final thing is very important in my opinion, being a computer programmer myself.
Input and output should be a single line of code that can be placed in any programming language.  The longer I program, the more I realize that keeping it simple and easy for all is the key.

For example, it should be just one line of code:

Receive stream = anything(send stream, any parameters required by standards)

The sender and receiver stream formats can be defined in a set of parameters that the General Artificial Intelligence Brain General can understand, and those parameters can also be set so that a certain format is returned within the receiver stream, so it can be used by the local algorithm.
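A hypothetical Python rendering of that one-line call might look as follows. The function name `gaibg_call` and its parameters are invented stand-ins for whatever transport and parameter names the standards would actually define; here it just echoes the stream back, optionally reformatted, to show the shape of the interface.

```python
# Hypothetical stand-in for the standard one-line send/receive call.

def gaibg_call(send_stream, **params):
    """Placeholder for the standard call: in a real system this would
    route `send_stream` to the right set of A.I.'s and return the receive
    stream in the format requested by `params`.  Here it simply echoes
    the stream, for illustration."""
    fmt = params.get("return_format", "raw")
    if fmt == "upper":
        return send_stream.upper()
    return send_stream

# The single line a local algorithm would write:
receive_stream = gaibg_call("hello world", stream_type="text",
                            return_format="upper")
print(receive_stream)
```

The key design point is that the local algorithm only ever sees one call: the stream format negotiation happens entirely through the parameters, as the text describes.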

Finally, because these standards are completely separate from hardware and software requirements, systems like IBM's, DeepMind's and all other A.I. systems can work together for the good of all, given the standards and idea proposed.  With these standards we may have true General Artificial Intelligence General, or GAIG, within a couple of years.

This idea is the intellectual property of myself and GPRSG.

General Picture Recognition Software