path-tracing

Installs: 35
Rank: #19535

## Installation

```
npx skills add https://github.com/letta-ai/skills --skill path-tracing
```

# Path Tracing and Ray Tracing Implementation

This skill provides guidance for implementing path tracers and ray tracers, particularly for image reconstruction tasks where a target image must be matched within a similarity threshold.

## When to Use This Skill

- Implementing ray tracers or path tracers in C/C++
- Reconstructing images by reverse-engineering scene parameters
- Building rendering systems with geometric primitives (spheres, planes)
- Tasks requiring image similarity matching (L2 norm, cosine similarity)
- Rendering scenes with shadows, reflections, or procedural textures

## Workflow: Baseline-First Development

### Phase 1: Establish a Working Baseline

Before any optimization or parameter tuning, establish a complete working render:

1. **Start with minimal samples** - Use S=1 or S=2 to verify the pipeline produces complete output.
2. **Verify output file completeness** - Check that the output file contains valid, complete image data before proceeding.
3. **Test in the target environment early** - If the task specifies a chroot jail, sandbox, or other specific execution environment, test there immediately.
4. **Fix all compiler warnings first** - Treat warnings as errors; a typo like `doubled` instead of `double d` causes undefined behavior.

### Phase 2: Scene Analysis

When reconstructing from a reference image:

1. **Read and parse the image header** - Extract dimensions, format, and color depth.
2. **Sample key pixels systematically** - Sample corners, the center, and regions of interest.
3. **Identify scene elements** - Count all objects (spheres, planes, lights) before implementing.
4. **Analyze gradients and patterns** - Sample multiple points to derive the mathematical relationships behind sky gradients and floor patterns.
5. **Document derived parameters** - Write down calculated values (camera FOV, sphere position/radius, light direction).

### Phase 3: Incremental Implementation

1. **One feature at a time** - Add the sphere, then the floor, then shadows, then soft shadows.
2. **Validate each addition** - Render and compare after each feature.
3. **Calculate render time before choosing a sample count** - For a 2400×1800 image at 50 samples, estimate: pixels × samples × rays_per_sample × time_per_ray.
4. **Never modify code while renders are running** - This creates race conditions and confusion about which version produced which output.

### Phase 4: Validation

1. **Use the exact similarity metric** - If grading uses normalized L2 or cosine similarity, compute that metric, not RMS error or another proxy.
2. **Create a reusable validation script early** - Avoid rewriting comparison code repeatedly.
3. **Verify the output file before submission** - Check the file size, parse the image, and confirm the dimensions match expectations.

## Common Scene Elements

### Sky Gradients

Sample multiple y-coordinates at a fixed x to derive the vertical gradient formula, and multiple x-coordinates at a fixed y to check for horizontal variation.

### Checkered Floor

To match checker patterns:

- Sample points across the floor to determine the checker scale.
- Identify the checker color values precisely.
- Verify the pattern origin and orientation.

### Spheres

Derive sphere parameters by:

- Finding the visual center of the sphere in image coordinates.
- Estimating the radius from the sphere's apparent size.
- Calculating reflection and shadow geometry to verify the position.

### Shadows

- Hard shadows: a single ray to the light source.
- Soft shadows: multiple samples with jittered light directions.
- Verify that the shadow direction matches the light position.

## Time Management

### Calculate Expected Render Time First

```
render_time ≈ (width × height × samples × bounces) / rays_per_second
```

Typical ray tracer performance is 100K-1M rays/second, depending on scene complexity.
For a 2400×1800 image:

- S=1: ~4M rays, seconds to complete
- S=10: ~40M rays, minutes to complete
- S=50: ~200M rays, could take 10+ minutes
- S=100: ~400M rays, may exceed time limits

### Adjust Parameters for the Time Budget

If the task has a time limit:

- Calculate the maximum feasible sample count.
- Start at that limit, not above it.
- Consider rendering at a lower resolution first for validation.

## Common Pitfalls to Avoid

### Code Quality Issues

- **Typos in type declarations** - `doubled` vs `double d` causes undefined behavior.
- **Ignoring compiler warnings** - Fix all warnings before running long processes.
- **Race conditions** - Never edit code while a render is still running.

### Validation Mistakes

- **Wrong similarity metric** - Match the exact metric used for grading.
- **Incomplete output files** - Always verify file completeness before considering the task done.
- **Downsampled validation** - If the final output must be full resolution, validate at full resolution.

### Time Management Mistakes

- **Starting with high sample counts** - Always start low (S=1) to verify correctness.
- **Not calculating expected render time** - This leads to timeouts and wasted iterations.
- **Iterating without completing** - One complete low-quality render beats many incomplete high-quality attempts.

### Analysis Mistakes

- **Missing scene elements** - Thoroughly identify all objects before implementing.
- **Arbitrary parameter guessing** - Derive parameters mathematically from reference image samples.
- **Sign errors in gradients** - Double-check gradient direction by sampling multiple points.

## Verification Checklist

Before considering the task complete:

- Code compiles without warnings.
- The output file exists and is complete (correct size, valid format).
- Output dimensions match the expected dimensions.
- The similarity metric meets the required threshold.
- The render has been tested in the target execution environment.
- All scene elements from the reference are present in the output.
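The "exact similarity metric" and "reusable validation script" advice can be combined into one comparison helper. Here is a C++ sketch operating on flattened pixel buffers (image loading is omitted); note that "normalized L2" has more than one common definition, and this version uses RMS error scaled by the value range, so adjust it to match the grader's actual formula.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Cosine similarity between two flattened pixel buffers. 1.0 means the buffers
// point in the same direction as vectors; insensitive to overall brightness scale.
double cosine_similarity(const std::vector<double>& a, const std::vector<double>& b) {
    assert(a.size() == b.size());
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb));
}

// One convention for normalized L2: RMS difference divided by the value range,
// so 0.0 means identical images and 1.0 means maximally different everywhere.
double normalized_l2(const std::vector<double>& a, const std::vector<double>& b,
                     double value_range = 255.0) {
    assert(a.size() == b.size());
    double sum_sq = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        sum_sq += d * d;
    }
    return std::sqrt(sum_sq / static_cast<double>(a.size())) / value_range;
}
```

Run the same function over every candidate render rather than eyeballing differences; a metric mismatch with the grader is listed above as a common failure mode.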
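The output-file items in the checklist can also be scripted. The sketch below assumes the render is written as a binary PPM (P6) with maxval 255 and no comment lines in the header; both assumptions should be adapted to the actual output format.

```cpp
#include <cstdio>
#include <string>

// Checks that a binary PPM (P6) file is complete: the header parses and the
// pixel payload holds exactly width*height*3 bytes. Assumes no '#' comment
// lines in the header, which the full PPM format would allow.
bool verify_ppm(const char* path, int expected_w, int expected_h) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    char magic[3] = {0};
    int w = 0, h = 0, maxval = 0;
    if (std::fscanf(f, "%2s %d %d %d", magic, &w, &h, &maxval) != 4 ||
        std::string(magic) != "P6" || maxval != 255) {
        std::fclose(f);
        return false;
    }
    std::fgetc(f);  // consume the single whitespace byte after the header
    long data_start = std::ftell(f);
    std::fseek(f, 0, SEEK_END);
    long payload = std::ftell(f) - data_start;
    std::fclose(f);
    return w == expected_w && h == expected_h &&
           payload == static_cast<long>(w) * h * 3;
}
```

Running a check like this before submission catches the "incomplete output file" pitfall, which otherwise only shows up when the grader rejects the render.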
