How Should AI Be Decentralized?


The intersection of Web3 and artificial intelligence (AI), specifically in the form of generative AI, has become one of the hottest topics of debate within the crypto community. After all, generative AI is revolutionizing all areas of traditional software stacks, and Web3 is no exception. Given that decentralization is the core value proposition of Web3, many of the emergent Web3-generative-AI projects and scenarios project some form of decentralized generative AI value proposition.

Jesus Rodriguez is the CEO of IntoTheBlock.

In Web3, we have a long history of looking at every domain through a decentralization lens, but the reality is that not all domains can benefit from decentralization, and for every domain there is a spectrum of decentralization scenarios. Breaking down that idea from a first-principles standpoint leads us to three key questions:

  1. Does generative AI deserve to be decentralized?

  2. Why hasn't decentralized AI worked at scale before, and what's different with generative AI?

  3. What are the different dimensions of decentralization in generative AI?

These questions are far from trivial, and each one can spark passionate debates. However, I believe that thinking through these questions is essential to develop a comprehensive thesis about the opportunities and challenges at the intersection of Web3 and generative AI.

Does AI Deserve to Be Decentralized?

The philosophical case for decentralizing AI is simple. AI is digital knowledge, and knowledge might be the number one construct of the digital world that deserves to be decentralized. Throughout the history of Web3, we have made many attempts to decentralize things that work extremely well in a centralized architecture and where decentralization didn't provide obvious benefits. Knowledge is not one of those cases: it is a natural candidate for decentralization from both a technical and an economic standpoint.

The level of control being accumulated by the big AI providers is creating a massive gap with the rest of the field, to the point that it is becoming scary. AI does not evolve linearly or even exponentially; it follows a multi-exponential curve.

GPT-4 represents a massive improvement over GPT-3.5 across many dimensions, and that trajectory is likely to continue. At some point, it becomes infeasible to compete with centralized AI providers. A well-designed decentralized network could enable an ecosystem in which different parties collaborate to improve the quality of models, democratizing access to knowledge and sharing the benefits.

Transparency is the second factor to consider when evaluating the merits of decentralization in AI. Foundation model architectures involve billions of interconnected parameters across many layers, making them impractical to interpret using traditional monitoring practices. Nobody really understands what happens inside GPT-4, and OpenAI has no incentive to be more transparent in that area. Decentralized AI networks could enable open testing benchmarks and guardrails that provide visibility into the functioning of foundation models without requiring trust in a specific provider.

Why Hasn’t Decentralized AI Worked Until Now?

If the case for decentralized AI is so clear, then why haven't we seen any successful attempts in this area? After all, decentralized AI is not a new idea, and many of its principles date back to the early 1990s. Without getting into technicalities, the main reason for the lack of success of decentralized AI approaches is that the value proposition was questionable at best.

Before large foundation models came onto the scene, the dominant architectural paradigm was supervised learning in its different forms, which required highly curated, labeled datasets that resided mostly within corporate boundaries. Additionally, the models were small enough to be interpretable using mainstream tools. Finally, the case for control was also very weak, as no models were powerful enough to cause any real concern.

In a somewhat paradoxical twist, the rise of large-scale generative AI and foundation models under centralized control is what made the case for decentralized AI viable for the first time in history.

Now that we understand that AI deserves to be decentralized and that this time is somewhat different from previous attempts, we can start thinking about which specific elements require decentralization.

The Dimensions of Decentralization in AI

When it comes to generative AI, there is no single approach to decentralization. Instead, decentralization should be considered in the context of the different phases of the lifecycle of foundation models. Here are the three main stages in the operational lifespan of foundation models that are relevant to decentralization (a toy sketch of all three follows the list):

  1. Pre-training is the stage in which a model is trained on large volumes of unlabeled and labeled datasets.

  2. Fine-tuning, which is typically optional, is the phase in which a model is “retrained” on domain-specific datasets to optimize its performance on different tasks.

  3. Inference is the stage in which a model outputs predictions based on specific inputs.
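
To make the three stages concrete, here is a deliberately tiny, self-contained Python sketch. It uses a character-bigram counter as a stand-in for a foundation model, so the corpora, weighting and function names are purely illustrative assumptions, but the flow mirrors the lifecycle above: pre-train on broad unlabeled text, optionally fine-tune on domain text, then run inference.

```python
from collections import defaultdict

def pretrain(corpus: str) -> dict:
    """Stage 1: learn next-character counts from a broad, unlabeled corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def fine_tune(counts: dict, domain_text: str, weight: int = 5) -> dict:
    """Stage 2 (optional): "retrain" on domain-specific text, weighting it more heavily."""
    for a, b in zip(domain_text, domain_text[1:]):
        counts[a][b] += weight
    return counts

def infer(counts: dict, prompt_char: str) -> str:
    """Stage 3: predict the most likely next character for a given input."""
    followers = counts.get(prompt_char)
    if not followers:
        return ""
    return max(followers, key=followers.get)

# Illustrative usage with made-up corpora.
model = pretrain("the cat sat on the mat. the dog sat on the log.")
model = fine_tune(model, "the token the token the token")
print(infer(model, "t"))  # most likely character after 't' given both corpora
```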

Throughout these three phases, there are different dimensions that are good candidates for decentralization.

The Compute Decentralization Dimension

Decentralized compute can be incredibly relevant during pre-training and fine-tuning and may be less relevant during inference. Foundation models notoriously require long cycles of GPU compute, which are typically executed in centralized data centers. A decentralized GPU compute network in which different parties supply compute for the pre-training and fine-tuning of models could help loosen the control that large cloud providers have over the creation of foundation models.
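
As a hedged illustration of the idea, and not the protocol of any existing network, the NumPy sketch below simulates the basic coordination pattern: three hypothetical compute providers each compute gradients on their own data shard, and a coordinator averages the results to update a shared model. A real decentralized GPU network would have to solve verification, payments and fault tolerance on top of this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data split across three hypothetical compute providers.
X = rng.normal(size=(300, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=300)
shards = [(X[i::3], y[i::3]) for i in range(3)]  # one shard per provider

def local_gradient(w, X_shard, y_shard):
    """Gradient of mean squared error, computed locally by one provider."""
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

# Coordinator loop: collect local gradients, average them, update the shared model.
w = np.zeros(4)
lr = 0.05
for step in range(200):
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
    w -= lr * np.mean(grads, axis=0)

print("recovered weights:", np.round(w, 2))  # should approximate true_w
```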

The Data Decentralization Dimension

Data decentralization could play an incredibly important role during the pre-training and fine-tuning phases. Currently, there is very little transparency around the concrete composition of the datasets used to pre-train and fine-tune foundation models. A decentralized data network could incentivize different parties to supply datasets with appropriate disclosures and track their usage in the pre-training and fine-tuning of foundation models.
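
A minimal sketch of what such a registry could record follows. The field names, the in-memory storage and the idea of hashing the raw content for later auditability are assumptions made for illustration; a real decentralized data network would persist these records on a shared ledger.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    name: str
    contributor: str
    license_id: str
    disclosure: str            # e.g. collection method, known gaps or biases
    content_hash: str          # fingerprint of the raw data for later audits
    usages: list = field(default_factory=list)

class DataRegistry:
    """In-memory stand-in for a shared, decentralized dataset registry."""

    def __init__(self):
        self.records: dict[str, DatasetRecord] = {}

    def register(self, name, contributor, license_id, disclosure, raw_bytes: bytes):
        digest = hashlib.sha256(raw_bytes).hexdigest()
        self.records[name] = DatasetRecord(name, contributor, license_id, disclosure, digest)

    def record_usage(self, name, model_run: str, phase: str):
        """Log which model run consumed the dataset and in which lifecycle phase."""
        self.records[name].usages.append({
            "model_run": model_run,
            "phase": phase,
            "at": datetime.now(timezone.utc).isoformat(),
        })

# Illustrative usage with made-up names.
registry = DataRegistry()
registry.register("web-crawl-slice-7", "node-42", "CC-BY-4.0",
                  "English web text, deduplicated", b"...raw data...")
registry.record_usage("web-crawl-slice-7", "run-2024-01", "pre-training")
print(registry.records["web-crawl-slice-7"])
```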

The Optimization Decentralization Dimension

Many phases of the lifecycle of foundation models require validation, often in the form of human intervention. Notably, reinforcement learning from human feedback (RLHF) enabled the transition from GPT-3 to ChatGPT by having humans validate the model's outputs to better align them with human interests. This kind of validation is particularly relevant during the fine-tuning phase, and currently there is very little transparency around it. A decentralized network of human and AI validators that perform specific validation tasks, with immediately traceable results, could be a significant improvement in this area.
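
The sketch below shows one possible shape for traceable validation records in an RLHF-style workflow: each hypothetical validator states which of two model outputs they prefer, every vote carries the validator's identity, and the aggregated tallies are the kind of preference data a reward model would later be trained on. The record fields and the simple tally are illustrative assumptions, not a description of any existing network.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class PreferenceVote:
    validator_id: str   # identity of the human (or AI) validator, for traceability
    prompt: str
    preferred: str      # the output the validator ranked higher
    rejected: str

def tally(votes: list[PreferenceVote], prompt: str) -> Counter:
    """Aggregate preferences for one prompt; in RLHF these pairs train a reward model."""
    return Counter(v.preferred for v in votes if v.prompt == prompt)

# Illustrative votes from hypothetical validators on two candidate completions.
votes = [
    PreferenceVote("validator-a", "Explain staking", "answer-1", "answer-2"),
    PreferenceVote("validator-b", "Explain staking", "answer-1", "answer-2"),
    PreferenceVote("validator-c", "Explain staking", "answer-2", "answer-1"),
]

print(tally(votes, "Explain staking"))  # answer-1 preferred 2-1
# Every vote keeps the validator_id, so the provenance of the feedback is auditable.
```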

The Evaluation Decentralization Dimension

If I were to ask you to select the best language model for a specific task, you would have to guess the answer. AI benchmarks are fundamentally broken: there is very little transparency around them, and they require quite a bit of trust in the parties that created them. Decentralizing the evaluation of foundation models for different tasks is an incredibly important step toward increasing transparency in the space. This dimension is particularly relevant during the inference phase.
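
As a toy illustration of the idea, the sketch below aggregates scores reported by several independent evaluators for the same models and task suite, using the median so that no single evaluator can unilaterally move the result. The evaluator names, scores and aggregation rule are all assumptions made for the example.

```python
from statistics import median

# Hypothetical scores (0-100) reported by independent evaluators
# for the same models on the same task suite.
reports = {
    "model-a": {"eval-node-1": 71, "eval-node-2": 74, "eval-node-3": 70, "eval-node-4": 98},
    "model-b": {"eval-node-1": 80, "eval-node-2": 79, "eval-node-3": 81, "eval-node-4": 78},
}

def aggregate(scores: dict[str, int]) -> float:
    """Median aggregation: a single outlier (or dishonest) evaluator cannot dominate."""
    return median(scores.values())

for model, scores in reports.items():
    print(model, "aggregate score:", aggregate(scores))
# model-a's inflated score of 98 from one node barely moves its median (72.5),
# so model-b (79.5) still ranks higher.
```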

The Model Execution Decentralization Dimension

Finally, we come to the most obvious area of decentralization. Using foundation models today requires trusting infrastructure controlled by a centralized party. Providing a network in which inference workloads can be distributed across different parties is an interesting challenge that could bring a tremendous amount of value to the adoption of foundation models.
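
One minimal way to reduce that trust is redundant execution: the same request is routed to several providers and the majority answer is accepted, so no single operator has to be trusted. The sketch below illustrates the pattern with hypothetical in-process providers standing in for remote inference endpoints; the provider names and quorum rule are illustrative assumptions.

```python
from collections import Counter
from typing import Callable

# Stand-ins for remote inference providers; in practice these would be network calls.
providers: dict[str, Callable[[str], str]] = {
    "provider-a": lambda prompt: "42",
    "provider-b": lambda prompt: "42",
    "provider-c": lambda prompt: "-1",   # a faulty or dishonest node
}

def decentralized_inference(prompt: str, quorum: int = 2) -> str:
    """Send the same prompt to every provider and accept the majority answer."""
    answers = Counter(run(prompt) for run in providers.values())
    answer, count = answers.most_common(1)[0]
    if count < quorum:
        raise RuntimeError("no quorum: providers disagree too much")
    return answer

print(decentralized_inference("what is 6 * 7?"))  # "42", despite one bad provider
```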

The Right Way to Do AI

Foundation models propelled AI to mainstream adoption and, in doing so, amplified the challenges that come with the rapidly increasing capabilities of these models. Among those challenges, the case for decentralization has never been stronger.

Digital knowledge deserves to be decentralized across all its dimensions: compute, data, optimization, evaluation and execution. No centralized entity deserves to have that much power over the future of intelligence. The case for decentralized AI is clear, but the technical challenges are tremendous. Decentralizing AI is going to require more than one technical breakthrough, but the goal is certainly achievable. In the era of foundation models, decentralized AI is the right way to approach AI.

Edited by Ben Schiller.
