AI Engine & Multi-Device Training

AI Engine

The AI engine serves as the core of the AI.Society metaverse: it provides the foundation for AI NPC (chatbot) functionality and language-learning capabilities, and it learns continuously from data supplied by users and other sources. To keep the engine scalable and efficient, it runs on a decentralized cloud of user-operated nodes. Each node represents an individual computing resource and plays a vital role in the distributed infrastructure of the AI.Society ecosystem, making the engine more efficient, secure, and diverse than a traditional centralized system.

Multi-Device Training

Multi-device training is a form of parallel distributed training. Machine learning training is slow and typically requires many experiments with different options. Distributed machine learning addresses this by parallelizing model training across low-cost infrastructure in a clustered environment such as Kubernetes, which can reduce training time from hours to minutes.
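The whitepaper does not name a specific training framework, so the following is only a minimal sketch of what parallel distributed training can look like, using PyTorch's DistributedDataParallel as an assumed stand-in; the model, dataset, and hyperparameters are placeholders. Each worker process trains on its own shard of the data, and gradients are averaged across workers after every backward pass.

```python
# Minimal data-parallel training sketch (illustrative; the framework choice is
# an assumption, not something specified by AI.Society). Launch with e.g.:
#   torchrun --nproc_per_node=4 train_ddp.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # Each worker joins the process group; rank and world size come from torchrun.
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU clusters
    rank = dist.get_rank()

    # Toy dataset and model stand in for real language-model training data.
    data = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
    sampler = DistributedSampler(data)          # shard the data per worker
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    model = DDP(torch.nn.Linear(32, 1))         # gradient sync handled by DDP
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)                # reshuffle shards each epoch
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()                     # all-reduce averages gradients
            opt.step()
        if rank == 0:
            print(f"epoch {epoch} loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

In practice, workers like these can be launched as one process per device (for example, one pod per node in a Kubernetes cluster), which is the clustered setup the paragraph above refers to.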

The node system used by AIS allows multiple users to contribute to AI training. It is built on parallel distributed computing: a method of dividing a large problem into smaller sub-problems and processing them simultaneously on multiple computing devices.

This approach improves computing performance and efficiency. A problem too large for a single computer can be divided across multiple machines, and those machines can communicate with one another to coordinate the work and combine their results.

This structure therefore saves cost and computing resources: parallel distributed computing enables high-performance computing on low-cost infrastructure.
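As a rough illustration of the divide-and-combine pattern described above (not actual AIS node code; the work function and data are hypothetical), the sketch below splits one large task into sub-problems, processes them simultaneously on several worker processes, and combines the partial results.

```python
# Toy illustration of parallel distributed computing: divide, process in
# parallel, then combine. The workload here is purely illustrative.
from multiprocessing import Pool

def process_chunk(chunk):
    """Sub-problem: each worker handles one slice of the data independently."""
    return sum(x * x for x in chunk)

def split(data, n_parts):
    """Divide the large problem into roughly equal sub-problems."""
    step = (len(data) + n_parts - 1) // n_parts
    return [data[i:i + step] for i in range(0, len(data), step)]

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = split(data, n_parts=4)

    # Process the sub-problems simultaneously on multiple workers,
    # then combine the partial results into the final answer.
    with Pool(processes=4) as pool:
        partial_results = pool.map(process_chunk, chunks)

    print(sum(partial_results))
```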

As a result, AIS Nodes increase the reliability and stability of the computing layer, which in turn stabilizes all AI services provided by AIS.

Why train LLMs on multiple devices?

LLMs require an enormous amount of data. An LLM learns the patterns and structure of language from diverse and vast natural-language data such as text corpora, web pages, and books. The more data an LLM is trained on, the better it performs at tasks such as text generation, question answering, and summarization. Collecting and processing data at this scale is therefore challenging and costly.

LLMs also require fast training. An LLM must be trained repeatedly over a long period to reach high quality and accuracy, because it has a huge number of parameters that are optimized with gradient-based methods such as stochastic gradient descent or Adam. These methods update the parameters based on feedback from the data, which takes many iterations and epochs, so training LLMs requires fast algorithms and hardware that can accelerate the process.
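To make the gradient-based updates concrete, the sketch below runs a tiny model through many stochastic batches and epochs with Adam (SGD shown as an alternative). The model and random data are placeholders only; a real LLM has billions of parameters and trains on a vast corpus.

```python
# Minimal sketch of gradient-based parameter updates (Adam vs. SGD) over many
# iterations and epochs; model and data are stand-ins, not AI.Society code.
import torch

torch.manual_seed(0)
x = torch.randn(512, 16)                    # stand-in for training inputs
y = torch.randn(512, 1)                     # stand-in for training targets

model = torch.nn.Linear(16, 1)              # a real LLM has billions of parameters
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# opt = torch.optim.SGD(model.parameters(), lr=1e-2)   # alternative optimizer
loss_fn = torch.nn.MSELoss()

for epoch in range(10):                     # repeated passes over the data
    for i in range(0, len(x), 64):          # many small (stochastic) batches
        xb, yb = x[i:i + 64], y[i:i + 64]
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()                     # feedback from the data: gradients
        opt.step()                          # parameter update (Adam or SGD rule)
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```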

Benefit of Data Parallelism
AI Engine Cycle