MathAware MIT: “On Intelligence”

https://github.com/Mathaware/MIT

Jeff Hawkins’ book “On Intelligence,” co-authored with Sandra Blakeslee, presents his memory-prediction framework theory of the brain. The theory posits that the brain is essentially a prediction machine, in which hierarchical cortical regions continually predict upcoming input sequences.

```python
# Analyzing keywords drawn from the GitHub repositories list.

keywords = [
    "chat", "chatbot", "gpt", "chat-application", "agent-based-framework",
    "agent-oriented-programming", "gpt-4", "chatgpt", "llmops", "gpt-35-turbo",
    "llm-agent", "llm-inference", "llm-framework", "code-interpreter",
    "langchain", "chatgpt-code-generation", "codeinterpreter", "prompt",
    "papers", "demonstration", "zero-shot-learning", "few-shot-learning",
    "prompt-tuning", "in-context-learning", "aigc", "prompt-engineering",
    "chain-of-thought", "instruction-tuning", "information-retrieval", "ai",
    "question-answering", "llama", "language-model", "agents",
    "multi-agent-systems", "rag", "openai-api", "gpt4", "local-llm",
    "retrieval-augmented-generation", "function-calling"
]

# A manual (non-exhaustive) grouping of the keywords by context and usage.
categories = {
    "model_types": ["gpt", "gpt-4", "language-model", "llama", "gpt4"],
    "applications": ["chat", "chatbot", "question-answering", "code-interpreter"],
    "programming_approaches": ["agent-based-framework", "agent-oriented-programming", "langchain"],
    "frameworks_and_tools": ["chat-application", "llmops", "llm-inference", "llm-framework", "prompt-engineering", "local-llm", "function-calling"],
    "research_and_development": ["papers", "demonstration", "zero-shot-learning", "few-shot-learning", "prompt-tuning", "in-context-learning", "aigc", "chain-of-thought", "instruction-tuning"],
    "advanced_techniques": ["information-retrieval", "retrieval-augmented-generation"]
}

# Output the categorized keywords.
print(categories)
```

These predictions do not necessarily reach far into the future, but they are timely enough to be of practical use to the organism. The brain is described as a feed-forward hierarchical state machine that learns through experience and controls the organism’s behavior by responding to predicted future events based on past data.
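The prediction-machine idea can be illustrated with a minimal sketch. This is not Hawkins’ algorithm or Numenta’s HTM; the `SequencePredictor` class and its methods are invented here purely to show the core loop of memorizing observed transitions and using them to anticipate the next input.

```python
from collections import defaultdict

class SequencePredictor:
    """Toy predictor: memorizes transition counts, predicts the most common successor."""

    def __init__(self):
        # transitions[prev][next] = number of times `next` followed `prev`
        self.transitions = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        # Learn from experience: count each adjacent pair in the sequence.
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, symbol):
        # Predict the most frequently observed successor, or None if unseen.
        options = self.transitions.get(symbol)
        if not options:
            return None
        return max(options, key=options.get)

p = SequencePredictor()
p.observe("abcabcabd")
print(p.predict("a"))  # 'b' (always followed 'a')
print(p.predict("b"))  # 'c' (seen twice, vs. 'd' once)
```

Real cortical prediction is of course hierarchical and continuous; this flat counter only captures the "predict the next input from remembered sequences" intuition.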

The hierarchical structure allows for memorizing frequently observed sequences, which Hawkins terms “cognitive modules,” and developing invariant representations. Higher levels of the cortical hierarchy can predict further into the future or across a broader range of sensory input, while lower levels deal with more immediate and domain-specific sensory or motor information.

One key component of the framework is Hebbian learning, the idea that neurons and their connections physically change as learning occurs. The concept of a cortical column, as formulated by Vernon Mountcastle, also plays a central role. Hawkins emphasizes the interconnections between columns and their collective activation, proposing that a cortical column represents a state within the state machine.

Hawkins makes several testable predictions related to his theory. For instance, he predicts the existence of “anticipatory cells” in the cortex that fire in anticipation of sensory events. He also hypothesizes that learned sequences are represented by “name cells” that remain active throughout a sequence, and “exception cells” that fire only when a prediction fails, signaling an unexpected event.
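The name-cell / exception-cell idea can be sketched in code. Everything here is illustrative: the `SequenceMemory` class and its field names are assumptions made for this sketch, not structures from the book or from Numenta’s HTM. The point is only the behavior Hawkins hypothesizes: a stable “name” stays active while inputs match a learned sequence, and an “exception” signal fires the moment a prediction fails.

```python
class SequenceMemory:
    """Toy model of a learned sequence with a name signal and an exception signal."""

    def __init__(self, name, sequence):
        self.name = name          # "name cell": active throughout a matching sequence
        self.sequence = sequence  # the learned sequence of expected inputs
        self.pos = 0              # current position within the sequence

    def step(self, observed):
        expected = self.sequence[self.pos]
        if observed == expected:
            # Prediction confirmed: the name stays active, advance the sequence.
            self.pos = (self.pos + 1) % len(self.sequence)
            return {"name": self.name, "exception": False}
        # Prediction failed: the "exception cell" fires and the sequence resets.
        self.pos = 0
        return {"name": None, "exception": True}

song = SequenceMemory("melody", list("abc"))
print(song.step("a"))  # name active, no exception
print(song.step("b"))  # name still active
print(song.step("x"))  # unexpected input: exception fires
```

In the cortex these signals would arise from populations of neurons across a hierarchy, not a single object; the sketch just separates the two roles the text describes.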

The book also explores the implications of this theory for building intelligent machines, aiming to replicate the properties of the neocortex through technologies like Hierarchical Temporal Memory (HTM), developed by Hawkins’ company, Numenta Inc. It closes with a comprehensive view of how a deep understanding of the brain’s functioning could drive advances in artificial intelligence.