News

  • brain ai

    Brain-Inspired AI Model Unveils Smarter Approach to Reasoning

    The rise of artificial intelligence has hit practical limits under current scaling rules. Teams now seek smarter designs that mimic how the human brain organizes thought. A recent paper in Patterns by Ge Wang and Feng-Lei Fan proposes adding a “height” dimension inside layers. This idea uses intra-layer links and feedback loops to enrich computation without simply growing width or depth.

    Why it matters: the proposal aims for more energy-efficient, interpretable models that can reason with fewer resources. It responds to 2024 evidence that longer training times and larger model size no longer guarantee gains. The plan builds on decades of neural-network research and the legacy of Hopfield and Hinton, whose work helped shape modern breakthroughs. The study lays out practical steps toward humanlike intelligence and clearer model behavior.

    Why brain-inspired designs matter now in artificial intelligence

    With the return on investment from sheer scaling waning, architects now probe structure over scale. The transformer ceiling has become an engineering constraint: larger models and more training data no longer deliver steady leaps. That slowdown drives interest in better internal organization.

    From scaling limits to structured complexity: the transformer ceiling

    Structured complexity means purposeful wiring inside layers, not blind growth. Research in 2024 showed that adding parameters and data hits diminishing returns. Engineers face challenges in stability, long-range dependencies, and generalization without huge compute budgets.

    Thinking in three dimensions: width, depth, and the “height” leap

    Ge Wang’s city-building analogy helps: width is rooms per floor, depth is floors, and height adds hallways that let rooms talk. Height adds intra-layer links that mimic lateral cortical connections and enrich local computation. This design focuses on how neurons route information so models can reason with fewer parameters.
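The hallway analogy can be sketched in code: below is a minimal toy layer whose units also exchange messages through lateral, intra-layer weights before emitting an output. All dimensions, weights, and the settling loop are invented for illustration; this is not the architecture from the Patterns paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for this sketch.
d_in, d_hidden = 8, 16
W = rng.normal(0.0, 0.3, (d_hidden, d_in))      # ordinary feed-forward weights (width/depth)
L = rng.normal(0.0, 0.1, (d_hidden, d_hidden))  # intra-layer "lateral" weights (the height idea)
np.fill_diagonal(L, 0.0)                        # no self-loops; units talk only to peers

def layer_with_lateral_links(x, n_steps=5):
    """One layer whose units settle via lateral exchange before producing output."""
    h = np.tanh(W @ x)                 # standard feed-forward pass
    for _ in range(n_steps):           # iterate: peers refine each other's activity
        h = np.tanh(W @ x + L @ h)
    return h

x = rng.normal(size=d_in)
out = layer_with_lateral_links(x)
print(out.shape)  # (16,)
```

The extra matrix L adds computation per parameter inside the layer instead of stacking more layers, which is the structural intuition the article describes.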
    The goal is a more efficient system that keeps or improves function while cutting reliance on ever-larger scale.

    New brain-inspired AI: adding a “height” dimension to models for humanlike reasoning

    Researchers are exploring structural changes inside layers to bring models closer to human reasoning. Wang and Fan’s new study shows how wiring within a layer can act like cortical lateral connections and enable richer local computation.

    Intra-layer links as cortical-like lateral connections

    Intra-layer links let neurons in the same layer exchange information directly. This boosts the quality of local computation without widening or deepening the network. The result is denser computation per parameter and clearer internal dynamics.

    Feedback loops for memory and iterative refinement

    Feedback loops feed outputs back as inputs over time. That framing supports short-term memory and sharper perception: a model can refine a noisy face or an ambiguous sentence by iterating until the output stabilizes.

    Phase transitions and intuition-like behavior

    The study borrows a view from physics: phase transitions describe how a system shifts from vague to confident patterns. As context accumulates, the model crosses thresholds and settles into meaningful states, producing intuition-like outputs.

    Practical gains: this design promises smarter, more energy-efficient systems with greater transparency. It could provide tools to probe cognition and may aid research into neurological disorders. The Patterns paper offers implementation cues.

    Oscillations and long-horizon learning: LinOSS and rhythmic sharing come to the fore

    Oscillatory principles are emerging as a practical route to stable, long-horizon sequence learning. Two teams of researchers have translated rhythmic dynamics from physics and biology into machine-learning systems that track very long data streams.

    MIT CSAIL’s LinOSS

    LinOSS is a linear oscillatory state-space model introduced by T. Konstantin Rusch and Daniela Rus. The method draws on forced harmonic oscillators from physics and neural rhythms from biology to keep predictions stable across hundreds of thousands of steps. LinOSS provides universal approximation for continuous, causal sequence mappings and avoids the brittle parameter choices that break many models. In benchmarks it beat strong baselines, nearly doubling Mamba’s performance on extreme-length tasks while remaining computationally efficient. The work earned an ICLR 2025 oral slot and support from funders including the Swiss National Science Foundation, Schmidt AI2050, and the U.S. Department of the Air Force AI Accelerator. Applications span health-care analytics, climate forecasting, autonomy, and finance.

    UMD’s rhythmic sharing

    At the University of Maryland, Wolfgang Losert and Hoony Kang developed “rhythmic sharing,” a training method that syncs models with neural rhythms. The approach helps flag early warning signals in domains such as cancer and climate, and it won UMD’s overall Invention of the Year Award at Innovate Maryland, highlighting how teams across institutions are converging on oscillation-based approaches. Together, LinOSS and rhythmic sharing show that modeling temporal structure can boost prediction in data-rich settings where early, stable signals matter.

    Conclusion

    A tighter focus on internal wiring offers a practical path to smarter, more efficient systems. The height dimension and intra-layer links from the study, paired with oscillation methods like LinOSS and rhythmic sharing, show how models can learn over time with less compute. Researchers can use these ideas to build systems that track long signals in health and climate, and to make decisions that are easier to inspect. This work draws on physics, biology, and neural networks to shape function that looks more like the human brain where useful. Progress will need community trials and robust research.
    Read the full papers to follow how these innovations mature and how the models perform in real-world use.
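The forced-harmonic-oscillator idea behind oscillatory state-space models can be illustrated with a toy recurrence: a damped, driven oscillator whose hidden state carries information stably across thousands of steps. The frequency, damping, and step size below are assumed values for this sketch, and this is a generic oscillator update, not the LinOSS implementation.

```python
import numpy as np

# Assumed constants for illustration: natural frequency, damping, time step.
omega, gamma, dt = 2.0, 0.1, 0.05

def oscillator_scan(inputs):
    """Drive a damped harmonic oscillator with an input sequence.

    State (y, v) = (position, velocity); semi-implicit Euler keeps the
    update numerically stable over very long horizons.
    """
    y, v = 0.0, 0.0
    outputs = []
    for u in inputs:
        v += dt * (-omega**2 * y - gamma * v + u)  # update velocity first
        y += dt * v                                # then position, using new velocity
        outputs.append(y)
    return np.array(outputs)

u = np.sin(0.3 * np.arange(2000))  # a long driving signal
ys = oscillator_scan(u)
print(np.isfinite(ys).all())       # the state stays bounded across all steps
```

Because the dynamics oscillate rather than decay or explode, the state neither forgets immediately nor blows up, which is the stability property the article attributes to oscillation-based models.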

    Read More »
  • Broadcom to integrate Nvidia AI

    Broadcom to integrate Nvidia AI technologies into VMware Cloud Foundation

    VMware Cloud Foundation 9.0 is now generally available as an AI‑native release that bundles Private AI Services. This launch aims at enterprises that need proven infrastructure and unified cloud operations. The platform pairs enterprise software leadership with trusted hardware partners. It brings GPUs, advanced networking, and familiar workflows into a single cloud foundation that customers can adopt with minimal disruption.…

    Read More »
  • mark and meta logo

    Meta AI Division Faces Hiring Freeze Amid Restructuring

    Meta has imposed an abrupt hiring freeze across its artificial intelligence division this week as part of a broader restructuring of its “Superintelligence Lab,” according to people familiar with the matter who spoke to The Wall Street Journal. The freeze bars both external hiring and internal team transfers unless Chief AI Officer Alexandr Wang approves exceptions. The move follows…

    Read More »
  • Elon Musk’s with xAI logo

    Elon Musk’s xAI Launches Lawsuit Against Apple & OpenAI

    A landmark legal battle is reshaping the artificial intelligence landscape. xAI filed a federal lawsuit this week against Apple and OpenAI, alleging anticompetitive behavior in smartphone app markets and chatbot development. The complaint claims both tech firms engaged in practices that limit fair competition and innovation. Filed in Texas federal court, the case focuses on alleged partnerships…

    Read More »
  • Nvidia

    Nvidia Q2 Earnings Put AI Market Hopes to the Test

    This second-quarter earnings report arrives as a pivotal check on a company that transformed from gaming into an artificial intelligence infrastructure leader in about two years. Consensus models peg revenue near $45.9–$46.0 billion and about $1.01 per share in EPS, figures analysts and investors will scan closely. Data center sales drove 88% of revenue last quarter, concentrating risk around hyperscaler…

    Read More »
  • data place

    $3 Billion AI Data Center Project Announced for North Dakota

    Discover the $3 billion AI Data Center project coming to North Dakota, shaping the future of tech infrastructure.

    Read More »
  • gpt-oss

    OpenAI Introduces ‘Harmony,’ the New Standard Format for GPT-OSS Models

    OpenAI's 'Harmony' format aims to establish a new standard for how GPT-OSS models structure prompts and responses.

    Read More »
  • AI and Healthcare

    Ensuring the Ethical Use of AI in Healthcare: Best Practices for 2025

    Get insights into the role of AI in Medicine and its ethical implications for 2025. Understand the emerging trends and practices in healthcare.

    Read More »
  • elon musk with grok logo

    xAI Releases Grok 2.5 as Open Source, Musk Confirms Grok 3 Coming Soon

    Elon Musk's xAI open-sources Grok 2.5 and confirms Grok 3 is coming soon. Stay updated on the latest AI breakthroughs.

    Read More »
  • Gartner LOGO

    Gartner Identifies Top AI Innovations for 2025

    Learn how Gartner's top AI innovations for 2025 will impact business and technology.

    Read More »