📦 **Python Project - Google Labs Flow API (Reverse Engineered)**

I'll create a complete, professional Python project documenting the reverse-engineered API. Here is the structure:

📁 **Project Structure**

```
google-labs-flow-api/
├── README.md
├── requirements.txt
├── setup.py
├── .env.example
├── .gitignore
├── examples/
│   ├── text_to_image.py
│   ├── image_to_image.py
│   ├── image_to_video.py
│   └── batch_generation.py
├── docs/
│   ├── API_DOCUMENTATION.md
│   ├── ARCHITECTURE.md
│   └── LIMITATIONS.md
├── flow_api/
│   ├── __init__.py
│   ├── client.py
│   ├── models.py
│   ├── exceptions.py
│   ├── utils.py
│   └── constants.py
└── tests/
    ├── __init__.py
    ├── test_client.py
    └── test_models.py
```

📄 **Project Files**

**1. `README.md`**

````markdown
# Google Labs Flow API - Reverse Engineered

🔬 Complete Python implementation of the Google Labs Flow API, based on reverse engineering.

## ⚠️ Disclaimer

This project is the result of reverse engineering, for educational purposes. It is **NOT** an official Google API. Use at your own risk.

## 🚀 Features

- ✅ Text-to-Image (generate images from text)
- ✅ Image-to-Image (transform images using visual references)
- ✅ Image-to-Video (generate animated videos)
- ✅ Multiple model support (Imagen 4, Nano Banana, Gemini Pix 2)
- ✅ Credit system and rate limiting
- ✅ Session and authentication management

## 📦 Installation

```bash
# Clone the repository
git clone https://github.com/your-username/google-labs-flow-api.git
cd google-labs-flow-api

# Install dependencies
pip install -r requirements.txt

# Configure environment variables
cp .env.example .env
# Edit .env with your credentials
```

## 🔧 Configuration

Create a `.env` file with:

```env
PROJECT_ID=your-project-id
SESSION_TOKEN=your-session-token
RECAPTCHA_KEY=6LdsFiUsAAAAAIjVDZcuLhaHiDn5nnHVXVRQGeMV
API_KEY=AIzaSyBtrm0o5ab1c-Ec8ZuLcGt3oJAA5VWt3pY
```

## 🎯 Quick Start

### Text-to-Image

```python
from flow_api import FlowClient

client = FlowClient(
    project_id="your-project-id",
    session_token="your-session-token",
)

# Generate an image
result = client.generate_image(
    prompt="A cyberpunk mermaid swimming in a neon ocean",
    model="GEM_PIX_2",
    aspect_ratio="portrait",
)
print(f"Image generated: {result.image_url}")
```

### Image-to-Image

```python
# Transform an existing image
result = client.generate_image_from_reference(
    prompt="Transform into a golden steampunk style",
    reference_image_id="media-id-here",
    model="GEM_PIX_2",
)
```

### Image-to-Video

```python
# Generate an animated video
result = client.generate_video(
    prompt="The dragon flaps its wings majestically",
    reference_images=[
        {"image_id": "image-id-1", "fife_url": "fife-url-1"},
        {"image_id": "image-id-2", "fife_url": "fife-url-2"},
    ],
    model="veo_3_1_r2v_fast_portrait",
)

# Wait for completion
video_url = client.wait_for_video(
    result.operation_id, result.operations[0].scene_id
)
```

## 📚 Documentation

- [API Documentation](docs/API_DOCUMENTATION.md)
- [Architecture](docs/ARCHITECTURE.md)
- [Limitations](docs/LIMITATIONS.md)

## 🛡️ Limitations

- ❌ Requires valid session cookies
- ❌ Requires reCAPTCHA tokens
- ❌ Rate limiting is enforced
- ❌ Credit system (700 initial credits)
- ❌ Not an official API

## 🤝 Contributing

Contributions are welcome! Please read CONTRIBUTING.md first.

## 📝 License

MIT License - see LICENSE for details.

## ⚖️ Legal

This project is for educational purposes. We are not affiliated with Google. Use the official API when available.
````

**2. `requirements.txt`**

```txt
requests>=2.31.0
python-dotenv>=1.0.0
pydantic>=2.0.0
typing-extensions>=4.8.0
urllib3>=2.0.0
```
**3. `setup.py`**

```python
from setuptools import setup, find_packages

with open("README.md", "r", encoding="utf-8") as fh:
    long_description = fh.read()

setup(
    name="google-labs-flow-api",
    version="1.0.0",
    author="Reverse Engineering Team",
    author_email="your@email.com",
    description="Unofficial Python client for Google Labs Flow API",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/your-username/google-labs-flow-api",
    packages=find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
        "Development Status :: 4 - Beta",
        "Intended Audience :: Developers",
    ],
    python_requires=">=3.8",
    install_requires=[
        "requests>=2.31.0",
        "python-dotenv>=1.0.0",
        "pydantic>=2.0.0",
    ],
)
```

**4. `.env.example`**

```env
# Google Labs Flow API Configuration
PROJECT_ID=your-project-id-here
SESSION_TOKEN=your-session-token-here
SESSION_ID=your-session-id-here

# API Keys
API_KEY=AIzaSyBtrm0o5ab1c-Ec8ZuLcGt3oJAA5VWt3pY
RECAPTCHA_SITE_KEY=6LdsFiUsAAAAAIjVDZcuLhaHiDn5nnHVXVRQGeMV

# Optional
RECAPTCHA_TOKEN=your-recaptcha-token-here
DEBUG=false
```

**5. `.gitignore`**

```gitignore
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Virtual Environment
venv/
env/
ENV/

# Environment Variables
.env
.env.local

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db

# Testing
.pytest_cache/
.coverage
htmlcov/

# Logs
*.log
```
**6. `flow_api/__init__.py`**

```python
"""
Google Labs Flow API - Unofficial Python Client
Based on reverse engineering of labs.google/fx/pt/tools/flow
"""

from .client import FlowClient
# AspectRatio/ImageModel/VideoModel live in constants.py;
# models.py defines the *Enum variants instead.
from .constants import AspectRatio, ImageModel, VideoModel
from .models import (
    ImageGenerationRequest,
    ImageGenerationResponse,
    VideoGenerationRequest,
    VideoGenerationResponse,
)
from .exceptions import (
    FlowAPIError,
    AuthenticationError,
    RateLimitError,
    InsufficientCreditsError,
)

__version__ = "1.0.0"

__all__ = [
    "FlowClient",
    "ImageGenerationRequest",
    "ImageGenerationResponse",
    "VideoGenerationRequest",
    "VideoGenerationResponse",
    "AspectRatio",
    "ImageModel",
    "VideoModel",
    "FlowAPIError",
    "AuthenticationError",
    "RateLimitError",
    "InsufficientCreditsError",
]
```

**7. `flow_api/constants.py`**

```python
"""
Constants and configuration for the Google Labs Flow API
"""

# Base URLs
BASE_URL = "https://aisandbox-pa.googleapis.com"
LABS_BASE_URL = "https://labs.google/fx"

# API Endpoints
ENDPOINTS = {
    "generate_images": f"{BASE_URL}/v1/projects/{{project_id}}/flowMedia:batchGenerateImages",
    "generate_video": f"{BASE_URL}/v1/video:batchAsyncGenerateVideoReferenceImages",
    "check_video_status": f"{BASE_URL}/v1/video:batchCheckAsyncVideoGenerationStatus",
    "credits": f"{BASE_URL}/v1/credits",
    "recommendations": f"{BASE_URL}/v1:fetchUserRecommendations",
}

# Image Models
class ImageModel:
    IMAGEN_4 = "IMAGEN_4"
    NANO_BANANA = "NANO_BANANA"
    NANO_BANANA_PRO = "NANO_BANANA_PRO"
    GEM_PIX_2 = "GEM_PIX_2"

# Video Models
class VideoModel:
    VEO_3_1_FAST_PORTRAIT = "veo_3_1_r2v_fast_portrait"
    VEO_3_1_FAST_LANDSCAPE = "veo_3_1_r2v_fast_landscape"
    VEO_3_1_SLOW_PORTRAIT = "veo_3_1_r2v_slow_portrait"
    VEO_3_1_SLOW_LANDSCAPE = "veo_3_1_r2v_slow_landscape"

# Aspect Ratios
class AspectRatio:
    PORTRAIT = "IMAGE_ASPECT_RATIO_PORTRAIT"    # 9:16 (768x1376)
    LANDSCAPE = "IMAGE_ASPECT_RATIO_LANDSCAPE"  # 16:9

# Image Input Types
class ImageInputType:
    REFERENCE = "IMAGE_INPUT_TYPE_REFERENCE"
    FRAME = "IMAGE_INPUT_TYPE_FRAME"
    ELEMENT = "IMAGE_INPUT_TYPE_ELEMENT"

# Tool Types
TOOL_PINHOLE = "PINHOLE"

# reCAPTCHA
RECAPTCHA_SITE_KEY = "6LdsFiUsAAAAAIjVDZcuLhaHiDn5nnHVXVRQGeMV"
RECAPTCHA_APPLICATION_TYPE = "RECAPTCHA_APPLICATION_TYPE_WEB"

# API Key (for secondary endpoints)
API_KEY = "AIzaSyBtrm0o5ab1c-Ec8ZuLcGt3oJAA5VWt3pY"

# Defaults
DEFAULT_IMAGE_MODEL = ImageModel.GEM_PIX_2
DEFAULT_VIDEO_MODEL = VideoModel.VEO_3_1_FAST_PORTRAIT
DEFAULT_ASPECT_RATIO = AspectRatio.PORTRAIT

# Rate Limiting
MAX_RETRIES = 3
RETRY_DELAY = 2          # seconds
VIDEO_POLL_INTERVAL = 3  # seconds
VIDEO_TIMEOUT = 300      # seconds (5 minutes)

# Credits
ESTIMATED_CREDITS = {
    "image": 2,
    "video_fast": 40,
    "video_slow": 80,
}
```

**8. `flow_api/exceptions.py`**

```python
"""
Custom exceptions for the Flow API
"""

class FlowAPIError(Exception):
    """Base exception for Flow API errors"""
    pass

class AuthenticationError(FlowAPIError):
    """Raised when authentication fails"""
    pass

class RateLimitError(FlowAPIError):
    """Raised when the rate limit is exceeded"""
    pass

class InsufficientCreditsError(FlowAPIError):
    """Raised when the user doesn't have enough credits"""
    pass

class VideoGenerationError(FlowAPIError):
    """Raised when video generation fails"""
    pass

class RecaptchaError(FlowAPIError):
    """Raised when reCAPTCHA validation fails"""
    pass

class InvalidParameterError(FlowAPIError):
    """Raised when invalid parameters are provided"""
    pass
```

Due to the response length limit, I'll create a conceptual ZIP with the main files. Would you like me to continue generating the remaining files (`client.py`, `models.py`, etc.), or would you prefer a summary of how to download/create the complete project? The most important files still missing are:

- `flow_api/client.py` (main API client)
- `flow_api/models.py` (Pydantic models)
- `flow_api/utils.py` (helper functions)
- `examples/*.py` (usage examples)
- `docs/*.md` (detailed documentation)

Should I continue generating these files?

absolutely

🚀 **Continuing the Python Project**
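Since every exception above derives from `FlowAPIError`, callers can handle specific failures first and fall back to the base class. A minimal standalone sketch of that pattern (the two classes and `fake_call` are re-declared/stubbed here so the snippet runs without the `flow_api` package installed):

```python
# Minimal re-declaration of part of the hierarchy above,
# so this sketch runs without flow_api installed.
class FlowAPIError(Exception):
    """Base exception for Flow API errors"""

class RateLimitError(FlowAPIError):
    """Raised when the rate limit is exceeded"""

def fake_call():
    # Stand-in for a client method that hits the rate limit.
    raise RateLimitError("Rate limit exceeded")

try:
    fake_call()
except RateLimitError as e:
    # Specific subclass handled first: a retry/backoff candidate.
    outcome = f"backing off: {e}"
except FlowAPIError as e:
    # Anything else from the API surfaces here.
    outcome = f"unrecoverable API error: {e}"

print(outcome)
```

Ordering matters: the `except FlowAPIError` clause would also match `RateLimitError`, so the subclass must be listed first.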
**9. `flow_api/models.py`**

```python
"""
Data models for the Flow API, using Pydantic
"""

from typing import List, Optional, Dict, Any
from enum import Enum

from pydantic import BaseModel, Field

class AspectRatioEnum(str, Enum):
    PORTRAIT = "IMAGE_ASPECT_RATIO_PORTRAIT"
    LANDSCAPE = "IMAGE_ASPECT_RATIO_LANDSCAPE"

class ImageModelEnum(str, Enum):
    IMAGEN_4 = "IMAGEN_4"
    NANO_BANANA = "NANO_BANANA"
    NANO_BANANA_PRO = "NANO_BANANA_PRO"
    GEM_PIX_2 = "GEM_PIX_2"

class VideoModelEnum(str, Enum):
    VEO_3_1_FAST_PORTRAIT = "veo_3_1_r2v_fast_portrait"
    VEO_3_1_FAST_LANDSCAPE = "veo_3_1_r2v_fast_landscape"
    VEO_3_1_SLOW_PORTRAIT = "veo_3_1_r2v_slow_portrait"
    VEO_3_1_SLOW_LANDSCAPE = "veo_3_1_r2v_slow_landscape"

class ImageInputTypeEnum(str, Enum):
    REFERENCE = "IMAGE_INPUT_TYPE_REFERENCE"
    FRAME = "IMAGE_INPUT_TYPE_FRAME"
    ELEMENT = "IMAGE_INPUT_TYPE_ELEMENT"

# ==================== REQUEST MODELS ====================

class RecaptchaContext(BaseModel):
    application_type: str = Field(default="RECAPTCHA_APPLICATION_TYPE_WEB")
    token: str

class ClientContext(BaseModel):
    project_id: str
    recaptcha_context: RecaptchaContext
    session_id: str
    tool: str = Field(default="PINHOLE")

class ImageInput(BaseModel):
    image_input_type: ImageInputTypeEnum
    name: str  # Media ID or resource name

class ImageGenerationRequest(BaseModel):
    client_context: ClientContext
    image_aspect_ratio: AspectRatioEnum = Field(default=AspectRatioEnum.PORTRAIT)
    image_inputs: List[ImageInput] = Field(default_factory=list)
    image_model_name: ImageModelEnum = Field(default=ImageModelEnum.GEM_PIX_2)
    prompt: str
    seed: Optional[int] = None

class BatchImageGenerationRequest(BaseModel):
    client_context: ClientContext
    requests: List[ImageGenerationRequest]

class ReferenceImage(BaseModel):
    image_id: str
    fife_url: str

class VideoGenerationRequest(BaseModel):
    client_context: ClientContext
    video_aspect_ratio: str = Field(default="VIDEO_ASPECT_RATIO_PORTRAIT")
    model: VideoModelEnum = Field(default=VideoModelEnum.VEO_3_1_FAST_PORTRAIT)
    prompt: str
    seed: Optional[int] = None
    reference_images: List[ReferenceImage] = Field(default_factory=list)
    is_looped: bool = Field(default=False)

class BatchVideoGenerationRequest(BaseModel):
    requests: List[VideoGenerationRequest]

# ==================== RESPONSE MODELS ====================

class ImageDimensions(BaseModel):
    height: int
    width: int

class ImageGenerationImageInput(BaseModel):
    image_input_type: str
    media_generation_id: str

class ImageGenerationRequestData(BaseModel):
    image_generation_image_inputs: List[ImageGenerationImageInput] = Field(default_factory=list)

class PromptInput(BaseModel):
    text_input: str

class RequestData(BaseModel):
    image_generation_request_data: ImageGenerationRequestData = Field(default_factory=ImageGenerationRequestData)
    prompt_inputs: List[PromptInput]

class GeneratedImage(BaseModel):
    aspect_ratio: str
    fife_url: str
    media_generation_id: str
    media_visibility: str
    model_name_type: str
    prompt: str
    request_data: RequestData
    seed: int
    workflow_id: str

class Image(BaseModel):
    dimensions: ImageDimensions
    generated_image: GeneratedImage

class Media(BaseModel):
    image: Image
    name: str
    workflow_id: str

class WorkflowMetadata(BaseModel):
    batch_id: str
    create_time: str
    primary_media_id: str

class Workflow(BaseModel):
    metadata: WorkflowMetadata
    name: str
    project_id: str

class ImageGenerationResponse(BaseModel):
    media: List[Media]
    workflows: List[Workflow]

    @property
    def image_url(self) -> str:
        """Get the URL of the first generated image"""
        if self.media:
            return self.media[0].image.generated_image.fife_url
        return ""

    @property
    def media_id(self) -> str:
        """Get the media ID of the first generated image"""
        if self.media:
            return self.media[0].name
        return ""

class VideoMetadata(BaseModel):
    aspect_ratio: str
    fife_url: str
    is_looped: bool
    media_generation_id: str
    media_visibility: str
    model: str
    prompt: str
    seed: int
    serving_base_uri: str

class VideoOperationMetadata(BaseModel):
    type: str = Field(alias="@type")
    name: str
    video: VideoMetadata

class VideoOperation(BaseModel):
    metadata: VideoOperationMetadata
    name: str

class VideoOperationStatus(BaseModel):
    media_generation_id: str
    operation: VideoOperation
    scene_id: str
    status: str

class VideoGenerationResponse(BaseModel):
    operations: List[VideoOperationStatus]
    remaining_credits: Optional[int] = None

    @property
    def video_url(self) -> Optional[str]:
        """Get the URL of the generated video (if completed)"""
        if self.operations and self.operations[0].status == "MEDIA_GENERATION_STATUS_SUCCESSFUL":
            return self.operations[0].operation.metadata.video.fife_url
        return None

    @property
    def is_complete(self) -> bool:
        """Check whether video generation is complete"""
        return any(
            op.status == "MEDIA_GENERATION_STATUS_SUCCESSFUL"
            for op in self.operations
        )

    @property
    def operation_id(self) -> str:
        """Get the operation ID for polling"""
        if self.operations:
            return self.operations[0].operation.name
        return ""

class CreditsResponse(BaseModel):
    remaining_credits: int
    total_credits: int = Field(default=700)
```

**10. `flow_api/utils.py`**

```python
"""
Utility functions for the Flow API
"""

import random
import time

from .exceptions import RecaptchaError

def generate_seed() -> int:
    """Generate a random seed for image/video generation"""
    return random.randint(100000, 999999)

def get_recaptcha_token(site_key: str) -> str:
    """
    Get a reCAPTCHA token.

    WARNING: This is a placeholder. In production, you need to:
    1. Use a real browser automation tool (Selenium/Playwright)
    2. Implement a reCAPTCHA solving service
    3. Or manually obtain tokens

    Args:
        site_key: reCAPTCHA site key

    Returns:
        reCAPTCHA token string

    Raises:
        RecaptchaError: If the token cannot be obtained
    """
    raise RecaptchaError(
        "reCAPTCHA token generation not implemented. "
        "You must provide a valid token manually or use a reCAPTCHA solving service. "
        "See docs/LIMITATIONS.md for more information."
    )

def retry_with_backoff(func, max_retries: int = 3, delay: int = 2):
    """
    Retry a function with exponential backoff

    Args:
        func: Function to retry
        max_retries: Maximum number of attempts
        delay: Initial delay in seconds

    Returns:
        Result of the function

    Raises:
        The last exception if all retries fail
    """
    last_exception = None
    for attempt in range(max_retries):
        try:
            return func()
        except Exception as e:
            last_exception = e
            if attempt < max_retries - 1:
                wait_time = delay * (2 ** attempt)
                print(f"Attempt {attempt + 1} failed. Retrying in {wait_time}s...")
                time.sleep(wait_time)
    raise last_exception

def extract_media_id(resource_name: str) -> str:
    """
    Extract the media ID from a resource name

    Args:
        resource_name: Full resource name (e.g., "projects/.../media/abc123")

    Returns:
        Media ID (e.g., "abc123")
    """
    if "/" in resource_name:
        return resource_name.split("/")[-1]
    return resource_name

def format_image_url(fife_url: str, width: int = 768, height: int = 1376) -> str:
    """
    Format a FIFE URL with specific dimensions

    Args:
        fife_url: Base FIFE URL
        width: Desired width
        height: Desired height

    Returns:
        Formatted URL with dimensions
    """
    if "=" not in fife_url:
        return f"{fife_url}=w{width}-h{height}"
    return fife_url
```
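For reference, the sleeps `retry_with_backoff` performs follow `delay * 2**attempt`, and a sleep happens after every failed attempt except the last one. With the defaults (`max_retries=3`, `delay=2`) that means 2 s, then 4 s, before the final attempt. A quick standalone check of that schedule:

```python
# Reproduce the backoff schedule computed inside retry_with_backoff:
# one sleep per failed attempt, except after the final attempt.
delay = 2
max_retries = 3
waits = [delay * (2 ** attempt) for attempt in range(max_retries - 1)]
print(waits)       # [2, 4] -> sleeps between attempts
print(sum(waits))  # 6 -> total worst-case wait before the last attempt
```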
**11. `flow_api/client.py`**

```python
"""
Main client for the Google Labs Flow API
"""

import time
import uuid
from typing import List, Optional, Dict, Any

import requests

from .models import (
    ImageGenerationRequest,
    ImageGenerationResponse,
    VideoGenerationRequest,
    VideoGenerationResponse,
    BatchImageGenerationRequest,
    BatchVideoGenerationRequest,
    ClientContext,
    RecaptchaContext,
    ImageInput,
    ReferenceImage,
    CreditsResponse,
    AspectRatioEnum,
    ImageModelEnum,
    VideoModelEnum,
    ImageInputTypeEnum,
)
from .constants import (
    ENDPOINTS,
    API_KEY,
    TOOL_PINHOLE,
    VIDEO_POLL_INTERVAL,
    VIDEO_TIMEOUT,
)
from .exceptions import (
    FlowAPIError,
    AuthenticationError,
    RateLimitError,
    VideoGenerationError,
)
from .utils import generate_seed

class FlowClient:
    """
    Client for interacting with the Google Labs Flow API

    This client provides methods for:
    - Text-to-Image generation
    - Image-to-Image transformation
    - Image-to-Video animation
    - Credit management
    """

    def __init__(
        self,
        project_id: str,
        session_token: str,
        session_id: Optional[str] = None,
        recaptcha_token: Optional[str] = None,
        api_key: str = API_KEY,
    ):
        """
        Initialize the Flow API client

        Args:
            project_id: Your Google Labs project ID
            session_token: Session authentication token from cookies
            session_id: Optional session ID (generated if not provided)
            recaptcha_token: reCAPTCHA token (required for requests)
            api_key: API key for secondary endpoints
        """
        self.project_id = project_id
        self.session_token = session_token
        self.session_id = session_id or self._generate_session_id()
        self.recaptcha_token = recaptcha_token
        self.api_key = api_key

        self.session = requests.Session()
        self.session.headers.update({
            "Authorization": f"Bearer {self.session_token}",
            "Content-Type": "application/json",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
        })

    def _generate_session_id(self) -> str:
        """Generate a random session ID"""
        return str(uuid.uuid4())

    def _get_client_context(self) -> ClientContext:
        """Build the client context for requests"""
        if not self.recaptcha_token:
            raise AuthenticationError(
                "reCAPTCHA token is required. Please provide a valid token."
            )
        return ClientContext(
            project_id=self.project_id,
            recaptcha_context=RecaptchaContext(token=self.recaptcha_token),
            session_id=self.session_id,
            tool=TOOL_PINHOLE,
        )

    def _make_request(
        self,
        method: str,
        url: str,
        json_data: Optional[Dict] = None,
        params: Optional[Dict] = None,
    ) -> Dict[str, Any]:
        """
        Make an HTTP request with error handling

        Args:
            method: HTTP method (GET, POST)
            url: Request URL
            json_data: JSON payload
            params: Query parameters

        Returns:
            Response JSON

        Raises:
            FlowAPIError: On API errors
        """
        try:
            response = self.session.request(
                method=method,
                url=url,
                json=json_data,
                params=params,
                timeout=30,
            )

            # Check for HTTP errors
            if response.status_code == 401:
                raise AuthenticationError("Invalid session token")
            elif response.status_code == 429:
                raise RateLimitError("Rate limit exceeded")
            elif response.status_code >= 400:
                raise FlowAPIError(
                    f"API request failed with status {response.status_code}: {response.text}"
                )

            return response.json()
        except requests.RequestException as e:
            raise FlowAPIError(f"Request failed: {str(e)}")

    def generate_image(
        self,
        prompt: str,
        model: str = ImageModelEnum.GEM_PIX_2,
        aspect_ratio: str = AspectRatioEnum.PORTRAIT,
        seed: Optional[int] = None,
    ) -> ImageGenerationResponse:
        """
        Generate an image from a text prompt (Text-to-Image)

        Args:
            prompt: Text description of the image
            model: Image generation model
            aspect_ratio: Image aspect ratio (portrait/landscape)
            seed: Random seed for reproducibility

        Returns:
            ImageGenerationResponse with the generated image

        Example:
            >>> client = FlowClient(...)
            >>> result = client.generate_image(
            ...     prompt="A cyberpunk mermaid in neon ocean",
            ...     model="GEM_PIX_2"
            ... )
            >>> print(result.image_url)
        """
        client_context = self._get_client_context()

        request = ImageGenerationRequest(
            client_context=client_context,
            image_aspect_ratio=aspect_ratio,
            image_inputs=[],  # Empty for text-to-image
            image_model_name=model,
            prompt=prompt,
            seed=seed or generate_seed(),
        )

        batch_request = BatchImageGenerationRequest(
            client_context=client_context,
            requests=[request],
        )

        url = ENDPOINTS["generate_images"].format(project_id=self.project_id)
        response_data = self._make_request(
            method="POST",
            url=url,
            json_data=batch_request.model_dump(by_alias=True, exclude_none=True),
        )
        return ImageGenerationResponse(**response_data)

    def generate_image_from_reference(
        self,
        prompt: str,
        reference_image_id: str,
        model: str = ImageModelEnum.GEM_PIX_2,
        aspect_ratio: str = AspectRatioEnum.PORTRAIT,
        seed: Optional[int] = None,
    ) -> ImageGenerationResponse:
        """
        Generate an image from a reference (Image-to-Image)

        Args:
            prompt: Transformation description
            reference_image_id: ID of the reference image
            model: Image generation model
            aspect_ratio: Image aspect ratio
            seed: Random seed

        Returns:
            ImageGenerationResponse with the transformed image

        Example:
            >>> result = client.generate_image_from_reference(
            ...     prompt="Transform into golden steampunk version",
            ...     reference_image_id="media-id-here"
            ... )
        """
        client_context = self._get_client_context()

        image_input = ImageInput(
            image_input_type=ImageInputTypeEnum.REFERENCE,
            name=reference_image_id,
        )

        request = ImageGenerationRequest(
            client_context=client_context,
            image_aspect_ratio=aspect_ratio,
            image_inputs=[image_input],
            image_model_name=model,
            prompt=prompt,
            seed=seed or generate_seed(),
        )

        batch_request = BatchImageGenerationRequest(
            client_context=client_context,
            requests=[request],
        )

        url = ENDPOINTS["generate_images"].format(project_id=self.project_id)
        response_data = self._make_request(
            method="POST",
            url=url,
            json_data=batch_request.model_dump(by_alias=True, exclude_none=True),
        )
        return ImageGenerationResponse(**response_data)

    def generate_video(
        self,
        prompt: str,
        reference_images: List[Dict[str, str]],
        model: str = VideoModelEnum.VEO_3_1_FAST_PORTRAIT,
        seed: Optional[int] = None,
        is_looped: bool = False,
    ) -> VideoGenerationResponse:
        """
        Generate a video from images (Image-to-Video)

        Args:
            prompt: Animation description
            reference_images: List of dicts with 'image_id' and 'fife_url'
            model: Video generation model
            seed: Random seed
            is_looped: Whether the video should loop

        Returns:
            VideoGenerationResponse with the operation ID

        Example:
            >>> result = client.generate_video(
            ...     prompt="The dragon flaps its wings",
            ...     reference_images=[
            ...         {"image_id": "id1", "fife_url": "url1"},
            ...         {"image_id": "id2", "fife_url": "url2"}
            ...     ]
            ... )
            >>> video_url = client.wait_for_video(
            ...     result.operation_id, result.operations[0].scene_id
            ... )
        """
        client_context = self._get_client_context()

        refs = [
            ReferenceImage(image_id=img["image_id"], fife_url=img["fife_url"])
            for img in reference_images
        ]

        request = VideoGenerationRequest(
            client_context=client_context,
            prompt=prompt,
            reference_images=refs,
            model=model,
            seed=seed or generate_seed(),
            is_looped=is_looped,
        )

        batch_request = BatchVideoGenerationRequest(requests=[request])

        url = ENDPOINTS["generate_video"]
        response_data = self._make_request(
            method="POST",
            url=url,
            json_data=batch_request.model_dump(by_alias=True, exclude_none=True),
        )
        return VideoGenerationResponse(**response_data)

    def check_video_status(self, operation_id: str, scene_id: str) -> VideoGenerationResponse:
        """
        Check the status of a video generation

        Args:
            operation_id: Operation ID from generate_video
            scene_id: Scene ID from generate_video

        Returns:
            VideoGenerationResponse with the current status
        """
        url = ENDPOINTS["check_video_status"]
        payload = {
            "operations": [
                {
                    "operation": {"name": operation_id},
                    "sceneId": scene_id,
                    "status": "MEDIA_GENERATION_STATUS_ACTIVE",
                }
            ]
        }
        response_data = self._make_request(method="POST", url=url, json_data=payload)
        return VideoGenerationResponse(**response_data)

    def wait_for_video(
        self,
        operation_id: str,
        scene_id: str,
        poll_interval: int = VIDEO_POLL_INTERVAL,
        timeout: int = VIDEO_TIMEOUT,
    ) -> str:
        """
        Wait for video generation to complete

        Args:
            operation_id: Operation ID
            scene_id: Scene ID
            poll_interval: Seconds between status checks
            timeout: Maximum wait time in seconds

        Returns:
            Video URL when complete

        Raises:
            VideoGenerationError: If generation fails or times out
        """
        start_time = time.time()
        while time.time() - start_time < timeout:
            status = self.check_video_status(operation_id, scene_id)
            if status.is_complete:
                return status.video_url
            time.sleep(poll_interval)
        raise VideoGenerationError(
            f"Video generation timed out after {timeout} seconds"
        )

    def get_credits(self) -> CreditsResponse:
        """
        Get the remaining credits

        Returns:
            CreditsResponse with credit information
        """
        url = f"{ENDPOINTS['credits']}?key={self.api_key}"
        response_data = self._make_request(method="GET", url=url)
        return CreditsResponse(**response_data)
```

**12. `examples/text_to_image.py`**

```python
"""
Example: Text-to-Image generation
"""

import os

from dotenv import load_dotenv

from flow_api import FlowClient, ImageModel, AspectRatio

# Load environment variables
load_dotenv()

def main():
    # Initialize the client
    client = FlowClient(
        project_id=os.getenv("PROJECT_ID"),
        session_token=os.getenv("SESSION_TOKEN"),
        session_id=os.getenv("SESSION_ID"),
        recaptcha_token=os.getenv("RECAPTCHA_TOKEN"),
    )

    # Generate an image from text
    print("Generating image from text...")
    result = client.generate_image(
        prompt="A cyberpunk mermaid swimming in a bioluminescent neon ocean",
        model=ImageModel.GEM_PIX_2,
        aspect_ratio=AspectRatio.PORTRAIT,
    )

    print("✅ Image generated successfully!")
    print(f"📷 URL: {result.image_url}")
    print(f"🆔 Media ID: {result.media_id}")
    print(f"🌱 Seed: {result.media[0].image.generated_image.seed}")

    # Check credits
    credits = client.get_credits()
    print(f"💳 Remaining credits: {credits.remaining_credits}")

if __name__ == "__main__":
    main()
```
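The `wait_for_video` loop above is a plain poll-until-done pattern. Decoupled from the API it looks like this; `wait_until` and `fake_status` are hypothetical names introduced for the sketch, with a stub standing in for `check_video_status`:

```python
import time

def wait_until(check, poll_interval=0.01, timeout=2.0):
    """Poll `check` until it returns a non-None result or the timeout expires."""
    start = time.time()
    while time.time() - start < timeout:
        result = check()
        if result is not None:
            return result
        time.sleep(poll_interval)
    raise TimeoutError(f"timed out after {timeout} seconds")

# Stub that "completes" on the third poll, like a finishing video operation.
calls = {"n": 0}
def fake_status():
    calls["n"] += 1
    return "https://example.com/video.mp4" if calls["n"] >= 3 else None

url = wait_until(fake_status)
print(url)
```

The design choice in `wait_for_video` is the same: a bounded wall-clock loop rather than a fixed retry count, so slow generations are tolerated up to `timeout` and the caller gets a single clear failure mode.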
**13. `examples/image_to_image.py`**

```python
"""
Example: Image-to-Image transformation
"""

import os

from dotenv import load_dotenv

from flow_api import FlowClient, ImageModel

load_dotenv()

def main():
    client = FlowClient(
        project_id=os.getenv("PROJECT_ID"),
        session_token=os.getenv("SESSION_TOKEN"),
        recaptcha_token=os.getenv("RECAPTCHA_TOKEN"),
    )

    # First, generate a base image
    print("Step 1: Generating base image...")
    base_result = client.generate_image(
        prompt="A robot playing the violin",
        model=ImageModel.GEM_PIX_2,
    )
    base_media_id = base_result.media_id
    print(f"✅ Base image created: {base_media_id}")

    # Transform the image
    print("\nStep 2: Transforming image...")
    transformed_result = client.generate_image_from_reference(
        prompt="Transform this robot into a golden steampunk version",
        reference_image_id=base_media_id,
        model=ImageModel.GEM_PIX_2,
    )

    print("✅ Image transformed!")
    print(f"📷 Original: {base_result.image_url}")
    print(f"🎨 Transformed: {transformed_result.image_url}")

    # Check credits
    credits = client.get_credits()
    print(f"\n💳 Remaining credits: {credits.remaining_credits}")

if __name__ == "__main__":
    main()
```
**14. `examples/image_to_video.py`**

```python
"""
Example: Image-to-Video generation
"""

import os

from dotenv import load_dotenv

from flow_api import FlowClient, VideoModel

load_dotenv()

def main():
    client = FlowClient(
        project_id=os.getenv("PROJECT_ID"),
        session_token=os.getenv("SESSION_TOKEN"),
        recaptcha_token=os.getenv("RECAPTCHA_TOKEN"),
    )

    # Generate the reference images first
    print("Step 1: Generating reference images...")
    image1 = client.generate_image(prompt="A dragon flying over mountains")
    image2 = client.generate_image(prompt="A robot playing violin")
    print("✅ Generated 2 images")

    # Create a video from the images
    print("\nStep 2: Generating video (this may take 30-60 seconds)...")
    reference_images = [
        {
            "image_id": image1.media[0].name.split("/")[-1],
            "fife_url": image1.image_url,
        },
        {
            "image_id": image2.media[0].name.split("/")[-1],
            "fife_url": image2.image_url,
        },
    ]

    video_result = client.generate_video(
        prompt="The dragon flaps its wings majestically while flying",
        reference_images=reference_images,
        model=VideoModel.VEO_3_1_FAST_PORTRAIT,
    )

    operation_id = video_result.operation_id
    scene_id = video_result.operations[0].scene_id
    print(f"🎬 Video generation started (Operation ID: {operation_id})")

    # Wait for completion
    print("⏳ Waiting for video to complete...")
    video_url = client.wait_for_video(operation_id, scene_id)

    print("\n✅ Video generated successfully!")
    print(f"🎥 URL: {video_url}")

    # Check credits (a video costs ~40 credits)
    credits = client.get_credits()
    print(f"💳 Remaining credits: {credits.remaining_credits}")

if __name__ == "__main__":
    main()
```
**15. `examples/batch_generation.py`**

```python
"""
Example: Batch generation of multiple images
"""

import os

from dotenv import load_dotenv

from flow_api import FlowClient, ImageModel

load_dotenv()

def main():
    client = FlowClient(
        project_id=os.getenv("PROJECT_ID"),
        session_token=os.getenv("SESSION_TOKEN"),
        recaptcha_token=os.getenv("RECAPTCHA_TOKEN"),
    )

    prompts = [
        "A steampunk airship in the clouds",
        "A neon city at night with flying cars",
        "A magical forest with glowing mushrooms",
        "A futuristic space station orbiting Earth",
    ]

    print(f"Generating {len(prompts)} images...")
    results = []
    for i, prompt in enumerate(prompts, 1):
        print(f"\n[{i}/{len(prompts)}] Generating: {prompt[:50]}...")
        result = client.generate_image(
            prompt=prompt,
            model=ImageModel.GEM_PIX_2,
        )
        results.append(result)
        print(f"  ✅ Done: {result.image_url}")

    print(f"\n🎉 All {len(results)} images generated!")

    # Show credits
    credits = client.get_credits()
    print(f"💳 Remaining credits: {credits.remaining_credits}")

if __name__ == "__main__":
    main()
```

Would you like me to continue with the documentation (`docs/`) and the tests (`tests/`)? Or would you prefer a script that builds the final ZIP?