A Step-by-Step Implementation Tutorial: Building a Modular AI Workflow with LangGraph and Anthropic's Claude 3.7 Sonnet API

In this tutorial, we provide a practical guide to LangGraph, a streamlined, graph-based AI orchestration framework, integrated with Anthropic's Claude API. Through executable code optimized for Google Colab, developers learn how to build and visualize AI workflows such as producing concise answers, critically analyzing responses, and automatically composing technical blog content. The compact implementation highlights LangGraph's intuitive node-graph architecture, which can manage complex sequences of Claude-powered natural-language tasks, from basic question answering to advanced content-generation pipelines.

from getpass import getpass
import os


anthropic_key = getpass("Enter your Anthropic API key: ")


os.environ["ANTHROPIC_API_KEY"] = anthropic_key


print("Key set:", "ANTHROPIC_API_KEY" in os.environ)

We securely prompt users to enter their Anthropic API key using Python's getpass module, ensuring that sensitive data is not displayed. The key is then set as the ANTHROPIC_API_KEY environment variable, and successful storage is confirmed.

import os
import json
import requests
from typing import Dict, List, Any, Callable, Optional, Union
from dataclasses import dataclass, field
import networkx as nx
import matplotlib.pyplot as plt
from IPython.display import display, HTML, clear_output

We import the required libraries for creating and visualizing structured AI workflows. These include modules for data handling (json, requests, dataclasses), graph creation and visualization (networkx, matplotlib), interactive notebook display (IPython.display), and type annotations for clarity and maintainability.

try:
    import anthropic
except ImportError:
    print("Installing anthropic package...")
    !pip install -q anthropic
    import anthropic


from anthropic import Anthropic

We make sure the anthropic Python package is available. The code tries to import the module and, if it is not found, installs it automatically with pip in the Google Colab environment. After installation, it imports the Anthropic client, which is required to communicate with Claude models via the Anthropic API.

@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)

This NodeConfig dataclass defines the structure of each node in the LangGraph workflow. Each node has a name, an executable function, optional lists of input and output keys, and an optional config dictionary for additional parameters. This setup allows modular, reusable node definitions for graph-based AI pipelines.
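To see the dataclass in isolation, here is a minimal, self-contained sketch. It reproduces the NodeConfig definition from above and wires it to a hypothetical `shout` transform (an illustrative function, not part of the tutorial's workflow) that reads one key from a shared state dictionary:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

# NodeConfig as defined above, repeated so this snippet runs standalone.
@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)

# Hypothetical node function: upper-cases the question held in the state.
def shout(state, **kwargs):
    return state["user_question"].upper()

node = NodeConfig(
    name="shouter",
    function=shout,
    inputs=["user_question"],
    outputs=["shouted_question"],
)

result = node.function({"user_question": "why graphs?"})
print(result)  # WHY GRAPHS?
```

Because a node is just data plus a callable, any function with the `(state, **kwargs)` signature can be dropped into the workflow.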

class LangGraph:
    def __init__(self, api_key: Optional[str] = None):
        self.api_key = api_key or os.environ.get("ANTHROPIC_API_KEY")
        if not self.api_key:
            from google.colab import userdata
            try:
                self.api_key = userdata.get('ANTHROPIC_API_KEY')
                if not self.api_key:
                    raise ValueError("No API key found")
            except Exception:
                print("No Anthropic API key found in environment variables or Colab secrets.")
                self.api_key = input("Please enter your Anthropic API key: ")
                if not self.api_key:
                    raise ValueError("Please provide an Anthropic API key")
       
        self.client = Anthropic(api_key=self.api_key)
        self.graph = nx.DiGraph()
        self.nodes = {}
        self.state = {}
   
    def add_node(self, node_config: NodeConfig):
        self.nodes[node_config.name] = node_config
        self.graph.add_node(node_config.name)
        for input_node in node_config.inputs:
            if input_node in self.nodes:
                self.graph.add_edge(input_node, node_config.name)
        return self
   
    def claude_node(self, name: str, prompt_template: str, model: str = "claude-3-7-sonnet-20250219",
                   inputs: List[str] = None, outputs: List[str] = None, system_prompt: str = None):
        """Convenience method to create a Claude API node"""
        inputs = inputs or []
        outputs = outputs or [name + "_response"]
       
        def claude_fn(state, **kwargs):
            prompt = prompt_template
            for k, v in state.items():
                if isinstance(v, str):
                    prompt = prompt.replace(f"{{{k}}}", v)
           
            message_params = {
                "model": model,
                "max_tokens": 1000,
                "messages": [{"role": "user", "content": prompt}]
            }
           
            if system_prompt:
                message_params["system"] = system_prompt
               
            response = self.client.messages.create(**message_params)
            return response.content[0].text
       
        node_config = NodeConfig(
            name=name,
            function=claude_fn,
            inputs=inputs,
            outputs=outputs,
            config={"model": model, "prompt_template": prompt_template}
        )
        return self.add_node(node_config)
   
    def transform_node(self, name: str, transform_fn: Callable,
                      inputs: List[str] = None, outputs: List[str] = None):
        """Add a data transformation node"""
        inputs = inputs or []
        outputs = outputs or [name + "_output"]
       
        node_config = NodeConfig(
            name=name,
            function=transform_fn,
            inputs=inputs,
            outputs=outputs
        )
        return self.add_node(node_config)
   
    def visualize(self):
        """Visualize the graph"""
        plt.figure(figsize=(10, 6))
        pos = nx.spring_layout(self.graph)
        nx.draw(self.graph, pos, with_labels=True, node_color="lightblue",
                node_size=1500, arrowsize=20, font_size=10)
        plt.title("LangGraph Flow")
        plt.tight_layout()
        plt.show()
       
        print("\nGraph Structure:")
        for node in self.graph.nodes():
            successors = list(self.graph.successors(node))
            if successors:
                print(f"  {node} → {', '.join(successors)}")
            else:
                print(f"  {node} (endpoint)")
        print()
   
    def _get_execution_order(self):
        """Determine execution order based on dependencies"""
        try:
            return list(nx.topological_sort(self.graph))
        except nx.NetworkXUnfeasible:
            raise ValueError("Graph contains a cycle")
   
    def execute(self, initial_state: Dict[str, Any] = None):
        """Execute the graph in topological order"""
        self.state = initial_state or {}
        execution_order = self._get_execution_order()
       
        print("Executing LangGraph flow:")
       
        for node_name in execution_order:
            print(f"- Running node: {node_name}")
            node = self.nodes[node_name]
            inputs = {k: self.state.get(k) for k in node.inputs if k in self.state}
           
            result = node.function(self.state, **inputs)
           
            if len(node.outputs) == 1:
                self.state[node.outputs[0]] = result
            elif isinstance(result, (list, tuple)) and len(result) == len(node.outputs):
                for i, output_name in enumerate(node.outputs):
                    self.state[output_name] = result[i]
       
        print("Execution completed!")
        return self.state


def run_example(question="What are the key benefits of using a graph-based architecture for AI workflows?"):
    """Run an example LangGraph flow with a predefined question"""
    print(f"Running example with question: '{question}'")
   
    graph = LangGraph()
   
    def question_provider(state, **kwargs):
        return question
   
    graph.transform_node(
        name="question_provider",
        transform_fn=question_provider,
        outputs=["user_question"]
    )
   
    graph.claude_node(
        name="question_answerer",
        prompt_template="Answer this question clearly and concisely: {user_question}",
        inputs=["user_question"],
        outputs=["answer"],
        system_prompt="You are a helpful AI assistant."
    )
   
    graph.claude_node(
        name="answer_analyzer",
        prompt_template="Analyze if this answer addresses the question well: Question: {user_question}\nAnswer: {answer}",
        inputs=["user_question", "answer"],
        outputs=["analysis"],
        system_prompt="You are a critical evaluator. Be brief but thorough."
    )
   
    graph.visualize()
   
    result = graph.execute()
   
    print("\n" + "="*50)
    print("EXECUTION RESULTS:")
    print("="*50)
    print(f"\n🔍 QUESTION:\n{result.get('user_question')}\n")
    print(f"📝 ANSWER:\n{result.get('answer')}\n")
    print(f"✅ ANALYSIS:\n{result.get('analysis')}")
    print("="*50 + "\n")
   
    return graph

The LangGraph class implements a lightweight structure for building and executing graph-based AI workflows powered by Claude from Anthropic. It lets users define modular nodes, either Claude-powered prompts or custom transformation functions, connect them through dependencies, visualize the entire pipeline, and execute them in topological order. The run_example function demonstrates this with a simple question-answering and evaluation flow, highlighting the clarity and modularity of LangGraph's architecture.
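The dependency-driven execution order can be sketched without any API calls. The following standalone snippet reimplements the topological-ordering idea behind execute() with a plain dictionary; the node names mirror the example above, but the `topo_order` helper is illustrative and not part of the class:

```python
# Each node maps to the list of nodes it depends on, mirroring the
# question -> answer -> analysis flow built in run_example().
deps = {
    "question_provider": [],
    "question_answerer": ["question_provider"],
    "answer_analyzer": ["question_provider", "question_answerer"],
}

def topo_order(deps):
    """Depth-first topological sort: visit dependencies before the node."""
    order, done = [], set()
    def visit(node):
        if node in done:
            return
        for dep in deps[node]:
            visit(dep)
        done.add(node)
        order.append(node)
    for node in deps:
        visit(node)
    return order

order = topo_order(deps)
print(order)  # ['question_provider', 'question_answerer', 'answer_analyzer']
```

Running nodes in this order guarantees that every node's inputs are already present in the shared state when its function fires, which is exactly why execute() sorts the graph before iterating.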

def run_advanced_example():
    """Run a more advanced example with multiple nodes for content generation"""
    graph = LangGraph()
   
    def topic_selector(state, **kwargs):
        return "Graph-based AI systems"
   
    graph.transform_node(
        name="topic_selector",
        transform_fn=topic_selector,
        outputs=["topic"]
    )
   
    graph.claude_node(
        name="outline_generator",
        prompt_template="Create a brief outline for a technical blog post about {topic}. Include 3-4 main sections only.",
        inputs=["topic"],
        outputs=["outline"],
        system_prompt="You are a technical writer specializing in AI technologies."
    )
   
    graph.claude_node(
        name="intro_writer",
        prompt_template="Write an engaging introduction for a blog post with this outline: {outline}\nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["introduction"],
        system_prompt="You are a technical writer. Write in a clear, engaging style."
    )
   
    graph.claude_node(
        name="conclusion_writer",
        prompt_template="Write a conclusion for a blog post with this outline: {outline}\nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["conclusion"],
        system_prompt="You are a technical writer. Summarize key points and include a forward-looking statement."
    )
   
    def assembler(state, introduction, outline, conclusion, **kwargs):
        return f"# {state['topic']}\n\n{introduction}\n\n## Outline\n{outline}\n\n## Conclusion\n{conclusion}"
   
    graph.transform_node(
        name="content_assembler",
        transform_fn=assembler,
        inputs=["topic", "introduction", "outline", "conclusion"],
        outputs=["final_content"]
    )
   
    graph.visualize()
    result = graph.execute()
   
    print("\n" + "="*50)
    print("BLOG POST GENERATED:")
    print("="*50 + "\n")
    print(result.get("final_content"))
    print("\n" + "="*50)
   
    return graph

The run_advanced_example function shows a more sophisticated use of LangGraph by orchestrating multiple Claude-powered nodes to generate a full blog post. It starts by selecting a topic, then creates an outline, an introduction, and a conclusion, all using structured Claude prompts. Finally, a transformation node assembles the content into a formatted blog post. This example demonstrates how LangGraph can automate complex, multi-step content-generation tasks through modular, connected nodes in a clear, executable flow.
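The `{placeholder}` substitution that claude_node performs before each API call can be exercised offline. This sketch uses a hypothetical state and a template mirroring the blog-post nodes; the substitution loop is the same pattern used inside claude_fn above:

```python
# Hypothetical state, standing in for values produced by earlier nodes.
state = {
    "topic": "Graph-based AI systems",
    "outline": "1. Introduction\n2. Core concepts\n3. Applications",
}
template = ("Write an engaging introduction for a blog post with this "
            "outline: {outline}\nTopic: {topic}")

# Replace every {key} marker with the matching string value from the state.
prompt = template
for k, v in state.items():
    if isinstance(v, str):
        prompt = prompt.replace(f"{{{k}}}", v)

print(prompt)
```

Because only string values are substituted, non-string state entries (lists, dicts) pass through untouched, and any `{key}` with no matching state entry is left in place.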

print("1. Running simple question-answering example")
question = "What are the three main advantages of using graph-based AI architectures?"
simple_graph = run_example(question)


print("\n2. Running advanced blog post creation example")
advanced_graph = run_advanced_example()

Finally, we trigger the execution of both defined LangGraph workflows. First, we run the simple question-answering example by passing a predefined question to the run_example() function. Then we launch the more advanced blog-post generation workflow with run_advanced_example(). Together, these calls show LangGraph's practical flexibility, from basic prompt-based interactions to multi-step content automation.

In conclusion, we have implemented LangGraph together with Anthropic's Claude API, showing how straightforward it is to design modular AI workflows that leverage powerful language models in structured, graph-based pipelines. By visualizing task flows and validating results in real time, from question answering and analytical evaluation to outline generation and content assembly, developers gain practical experience building maintainable, scalable AI systems. With Claude's capabilities, LangGraph offers an efficient foundation for orchestrating complex AI processes, enabling rapid prototyping and deployment in environments such as Google Colab.


Check out the Colab notebook. All credit for this research goes to the researchers of this project.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of the AI media platform Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a broad audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
