Results

Optimization Report

Best solution, seed results, objective breakdown, and optimization configuration

Optimization Report is the engineering summary of one optimization run. It is used to judge completion status, credibility of the reported optimum, and whether that optimum should be applied back to the current model.

Analysis Entry

Current results page | Method page
optimization on R / T / A targets | RTA and Layer Absorption Analysis
optimizer position inside the full workflow | Overview
post-optimization validation of local fields and local absorption | Depth Detector Analysis

Result Structure

The Optimization Report page contains the following modules (some appear only after a grid-mode run):

Module | Display condition | Purpose
Overview | Always shown | Quick summary of best fitness, runtime, evaluation count, and algorithm info
Grid Summary | Grid mode only | Statistics from the grid search phase
Seed Results | Grid mode only | Local optimization result for each seed, with per-seed apply
Best Solution | Always shown | Global best variable values, with apply-to-structure support
Evaluations | Always shown | Per-evaluation parameter combinations and improvement records
Objective Breakdown | Multi-objective runs | Individual score for each objective
Optimization Configuration | Always shown | Actual algorithm and parameter configuration used in this run

Overview

The Overview section displays four summary cards: Best Fitness, Execution Time, Evaluations (completed/budget), and Algorithm (type and termination status).

These cards are the first filter: they give a quick assessment of whether the run is worth deeper analysis.

Metric | Meaning
Best Fitness | Global best fitness value
Execution Time | Total optimization runtime
Evaluations | Completed evaluations / total evaluation budget (e.g. 150 / 200)
Algorithm | Algorithm name (TRF / L-BFGS-B / Nelder-Mead) and termination status

Priority checks:

  1. Whether Best Fitness is meaningfully better than the starting design.
  2. Whether Evaluations consumed the expected budget.
  3. Whether the algorithm termination status is completed.
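The three priority checks can be scripted against an exported report. This is a minimal sketch, assuming the report is available as a plain dict; all field names here are illustrative, not the product's actual export schema.

```python
def quick_triage(report, baseline_fitness):
    """Apply the three Overview priority checks to a report dict.

    Assumes lower fitness is better; `report` keys are hypothetical.
    """
    best = report["best_fitness"]
    done, budget = report["evaluations"], report["budget"]
    return {
        "improved": best < baseline_fitness,        # check 1: better than start?
        "budget_used": done / budget,               # check 2: budget consumed?
        "completed": report["status"] == "completed",  # check 3: clean termination?
    }
```

A run that fails any of these checks usually does not merit the deeper review steps below.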

Grid Summary

Appears only after a grid-mode run. Shows key statistics from the grid search phase:

Field | Meaning
Total Grid Points | Total grid sample count
Completed Grid Samples | Actual number of grid evaluations completed
Requested Top K Seeds | User-configured seed count
Selected Seed Count | Actual number of seeds that entered local optimization

If Selected Seed Count is less than Requested Top K Seeds, the grid search typically did not yield enough feasible points.
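Why fewer seeds can be selected than requested is easy to see in a sketch of the selection step: infeasible grid points are discarded first, and only then are the K best survivors taken. This is an illustrative reconstruction, not the product's actual implementation; field names are assumptions.

```python
def select_seeds(grid_samples, top_k):
    """Pick up to `top_k` seeds from grid samples (lower fitness = better).

    `grid_samples` is a list of dicts with hypothetical keys
    "feasible" and "fitness".
    """
    feasible = [s for s in grid_samples if s["feasible"]]
    feasible.sort(key=lambda s: s["fitness"])
    return feasible[:top_k]   # may return fewer than top_k seeds
```

If only two of five grid points are feasible, requesting three seeds still yields only two.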

Seed Results

Appears only after a grid-mode run. Lists the local optimization result for each seed.

Column | Meaning
Seed | Seed label (e.g. Seed 1, Seed 2)
Grid Rank | Rank of this seed in the grid evaluation
Grid Score | Fitness of this seed in the grid phase
Best Fitness | Best fitness after local optimization
Evaluations | Number of local evaluations consumed by this seed
Status | Termination status of the seed
Variable columns | Best variable values found by this seed
Apply to Structure | Writes this seed's best solution back to the structure

The global best seed is highlighted in green. Each seed can be independently applied to the structure.

Best Solution

Best Solution lists the global best variable paths and values. This is the most direct engineering output.

Parameter names use the fully expanded path form (e.g. structure.ITO.thickness) and map directly back to the variable paths configured in Optimizer.

The Apply to Structure button writes the best variable values back into the live model. Before using it, confirm two things:

  1. The proposed values are still manufacturable.
  2. You are prepared to run a normal forward calculation and verify the physical response.
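The dotted-path form makes the write-back mechanical: each path segment walks one level into the model, and the leaf segment receives the new value. A minimal sketch, assuming the model is a nested dict; the path `structure.ITO.thickness` comes from the example above, everything else is illustrative.

```python
def apply_solution(model, solution):
    """Write best variable values back into a nested model dict.

    `solution` maps dotted paths (e.g. "structure.ITO.thickness")
    to optimized values.
    """
    for path, value in solution.items():
        node = model
        *parents, leaf = path.split(".")
        for key in parents:
            node = node[key]      # walk down one level per path segment
        node[leaf] = value        # assign at the leaf
```

After applying, run a normal forward calculation before trusting the result.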

Evaluations

Evaluations records the step-by-step evaluation history during local optimization.

Column | Meaning
Evaluation | Evaluation sequence number
Type | Type label (evaluation = regular evaluation, improved = a new best was found)
Variable columns | Parameter combination at this evaluation point

In grid mode, a dropdown at the top allows filtering by seed. The last improved entry is highlighted in green, marking the final improvement point in the run.

This table is primarily used to judge whether the search converged into a credible region. If parameters remain widely scattered across the full range late in the run, the target landscape may be too flat, the budget may be too small, or the chosen variables may not be sensitive enough.
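One way to quantify "widely scattered" is the normalized range of each variable over the last few evaluations: values near 1.0 mean the search is still roaming the full bounds, values near 0 mean it has settled. This is a sketch under that assumption, not a metric the report itself computes.

```python
def parameter_spread(evals, bounds, last_n=20):
    """Normalized spread of each variable over the last `last_n` evaluations.

    `evals` is a list of {variable: value} dicts in evaluation order;
    `bounds` maps each variable to its (min, max) search range.
    """
    tail = evals[-last_n:]
    spread = {}
    for var, (lo, hi) in bounds.items():
        vals = [e[var] for e in tail]
        spread[var] = (max(vals) - min(vals)) / (hi - lo)
    return spread
```

A variable whose late-run spread stays close to 1.0 is a candidate for a larger budget or a sensitivity check.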

Objective Breakdown

Appears in multi-objective runs. Lists each objective's individual score at the best solution:

Column | Meaning
Objective | Objective label
Target | Target quantity
Score | Score for this objective

Useful for identifying conflicts between objectives. If one objective's score is significantly worse, consider adjusting its weight or target value.
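The effect of reweighting is easiest to see in a weighted-sum sketch: raising the weight of a lagging objective shifts where the combined score trades off. This assumes a simple weighted-sum aggregation, which may differ from the product's actual scoring rule.

```python
def combined_score(breakdown, weights):
    """Weighted sum of per-objective scores (lower = better assumed).

    `breakdown` maps objective label -> score, as in the
    Objective Breakdown table; `weights` maps label -> weight.
    """
    return sum(weights[obj] * score for obj, score in breakdown.items())
```

Halving the weight of a dominant objective lets a conflicting one pull the optimum further in its direction.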

Optimization Configuration

Records the actual algorithm configuration used in this run.

The table currently shows:

  • Job ID: run task identifier
  • Status: termination status
  • Max Evaluations or Max Local Evaluations Per Seed (depending on whether grid mode was used)
  • Algorithm-specific parameters (e.g. stepRatio, ftol, xtol, etc.)
  • Grid parameters (e.g. enabled, samplesPerVariable, topKSeeds)
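The configuration fields together imply the real search workload. A sketch of that arithmetic, assuming a full-factorial grid (samplesPerVariable raised to the number of variables) plus per-seed local budgets; the field names follow the bullets above, but the full-factorial assumption is mine.

```python
def total_workload(cfg):
    """Evaluations implied by a grid-mode configuration (illustrative).

    Assumes a full-factorial grid: samplesPerVariable ** numVariables.
    """
    grid = cfg["samplesPerVariable"] ** cfg["numVariables"]
    local = cfg["topKSeeds"] * cfg["maxLocalEvaluationsPerSeed"]
    return grid + local
```

Comparing this number against the Evaluations card in Overview confirms whether the run actually consumed its configured budget.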

Review Flow

To judge whether an optimization run is worth keeping:

  1. Overview: decide whether the run merits deeper analysis.
  2. Grid Summary / Seed Results (if present): confirm grid coverage and seed count.
  3. Best Solution: identify the reported optimum.
  4. Evaluations: confirm the search actually converged.
  5. Objective Breakdown (if present): confirm scores are balanced across objectives.
  6. Optimization Configuration: confirm the real search workload and budget.
  7. If needed, use Apply to Structure, return to the physics result pages, and rerun.

Common Errors and Checks

No data on the page

Check:

  1. Whether Run Optimizer was actually executed.
  2. Whether the optimizer configuration passed validation.
  3. Whether at least one objective and one variable remain enabled.

Optimization failure in report

The page surfaces failure states directly. Check:

  1. Whether variable bounds are valid (min < max).
  2. Whether variable paths still point to existing model items.
  3. Whether objective settings conflict with the current model.

Unstable optimum

  1. Click Apply to Structure.
  2. Rerun a normal forward calculation.
  3. Verify that the physical result pages still support the optimization claim.

Only a solution that remains valid after a forward rerun is a design candidate worth carrying forward.

Next Step

If the current task is to place the best solution back into an application workflow, continue with RTA and Layer Absorption Analysis or Overview, following the sequence "result confirmation -> sweep -> optimization -> result back-check".

Copyright © 2026 Dreapex