<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://orbitalmaneuvers.github.io/Worlds/feed.xml" rel="self" type="application/atom+xml" /><link href="https://orbitalmaneuvers.github.io/Worlds/" rel="alternate" type="text/html" /><updated>2026-03-29T14:46:49+00:00</updated><id>https://orbitalmaneuvers.github.io/Worlds/feed.xml</id><title type="html">Worlds Dev Blog</title><subtitle>Development notes for the Worlds Delphi project.</subtitle><entry><title type="html">Next Phase</title><link href="https://orbitalmaneuvers.github.io/Worlds/2026/03/29/agents.html" rel="alternate" type="text/html" title="Next Phase" /><published>2026-03-29T00:00:00+00:00</published><updated>2026-03-29T00:00:00+00:00</updated><id>https://orbitalmaneuvers.github.io/Worlds/2026/03/29/agents</id><content type="html" xml:base="https://orbitalmaneuvers.github.io/Worlds/2026/03/29/agents.html"><![CDATA[<h1 id="next-phase">Next Phase</h1>

<p>Today I finished some cleanup work on the environment visualizers. So the environment is far enough along to allow stepping and adding an agent.</p>

<p>Sometime today, some lucky generic agent will have its first thought.</p>

<p>As a reflection of its creator, no doubt that first thought will be “So … anything to eat around here?”</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Next Phase Today I finished some cleanup work on the environment visualizers. So the environment is far enough along to allow stepping and adding an agent. Sometime today, some lucky generic agent will have its first thought. As a reflection of its creator, no doubt that first thought will be “So … anything to eat around here?”]]></summary></entry><entry><title type="html">One Small Step</title><link href="https://orbitalmaneuvers.github.io/Worlds/2026/03/13/onestep.html" rel="alternate" type="text/html" title="One Small Step" /><published>2026-03-13T00:00:00+00:00</published><updated>2026-03-13T00:00:00+00:00</updated><id>https://orbitalmaneuvers.github.io/Worlds/2026/03/13/onestep</id><content type="html" xml:base="https://orbitalmaneuvers.github.io/Worlds/2026/03/13/onestep.html"><![CDATA[<h1 id="one-small-step-">One Small Step …</h1>

<p>… for mankind, one giant leap for me: the environment sim can step.</p>

<p>What works today:</p>

<ul>
  <li>defining foods</li>
  <li>defining biomes, their params and what foods they grow</li>
  <li>region definition by drawing a biome map</li>
  <li>upscaling a region into sim data from 32x32 authored to 256x256 runtime
    <ul>
      <li>resource allocation tables</li>
      <li>biome smoothing at edges</li>
      <li>randomization across same-biome cells</li>
    </ul>
  </li>
  <li>day/night cycle has been established</li>
  <li>the sim has a clock and a tick event</li>
  <li>the environment is managed through entire day/night cycles</li>
  <li>natural resources grow and decay according to authored intentions</li>
  <li>the UI has rudimentary text logging and resource visualization</li>
</ul>
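<p>To make the “clock and tick” part concrete, here’s a minimal sketch of how a tick loop like this can work. The project itself is Delphi, so this Python is purely illustrative, and every name and number in it is invented rather than taken from the actual code:</p>

```python
# Hypothetical sketch of a sim tick: a clock that wraps through a
# day/night cycle, with resources growing by day and decaying by night.
# None of these names or rates come from the actual (Delphi) project.

TICKS_PER_DAY = 100          # assumed cycle length
DAY_FRACTION = 0.5           # first half of the cycle is daytime

def is_day(tick):
    return (tick % TICKS_PER_DAY) < TICKS_PER_DAY * DAY_FRACTION

def step(tick, resources, grow_rate=1.05, decay_rate=0.97):
    """Advance one tick: grow each resource by day, decay it by night."""
    rate = grow_rate if is_day(tick) else decay_rate
    return tick + 1, {name: amount * rate for name, amount in resources.items()}

tick, resources = 0, {"biomass": 10.0}
for _ in range(TICKS_PER_DAY):          # run one full day/night cycle
    tick, resources = step(tick, resources)
```

<p>With these made-up rates, daytime growth outweighs nighttime decay over a full cycle, which is the kind of pressure the biomass idea below plays with.</p>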

<p>New Additions:</p>
<ul>
  <li>World editor</li>
  <li>World upscaling</li>
  <li>Started: sim sessions</li>
  <li>Molecule Ratings</li>
</ul>

<h2 id="nights-and-biomass">Nights and Biomass</h2>

<p>Having a distinct night time with different environmental pressures, plus a resource like biomass, gives interesting opportunities for agent behaviors. The idea goes like this …</p>]]></content><author><name></name></author><summary type="html"><![CDATA[One Small Step … … for mankind, one giant leap for me: the environment sim can step. What works today: defining foods defining biomes, their params and what foods they grow region definition by drawing a biome map upscaling a region into sim data from 32x32 authored to 256x256 runtime resource allocation tables biome smoothing at edges randomization across same-biome cells day/night cycle has been established the sim has a clock and a tick event the environment is managed through entire day/night cycles natural resources grow and decay according to authored intentions the UI has rudimentary text logging and resource visualization New Additions: World editor World upscaling Started: sim sessions Molecule Ratings Nights and Biomass Having a distinct night time with different environmental pressures, plus a resource like biomass, gives interesting opportunities for agent behaviors. The idea goes like this …]]></summary></entry><entry><title type="html">The Layered Approach</title><link href="https://orbitalmaneuvers.github.io/Worlds/2026/03/05/layers.html" rel="alternate" type="text/html" title="The Layered Approach" /><published>2026-03-05T00:00:00+00:00</published><updated>2026-03-05T00:00:00+00:00</updated><id>https://orbitalmaneuvers.github.io/Worlds/2026/03/05/layers</id><content type="html" xml:base="https://orbitalmaneuvers.github.io/Worlds/2026/03/05/layers.html"><![CDATA[<h1 id="the-layered-approach">The Layered Approach</h1>

<p>After lots of starts and stops on this project, I’ve finally arrived at a design that feels workable. It consists of 3 major layers:</p>

<h2 id="authoring-layer">Authoring Layer</h2>
<p>I wanted the authoring process to be friendly and “low resolution” so you don’t need to be a geologist, biologist, or artist to create a world where interesting things can happen.</p>

<p>The authoring layer presents a simple framework for building the environment.</p>

<h3 id="designing-world-objects">Designing World Objects</h3>

<p><code class="language-plaintext highlighter-rouge">Molecules</code> - A relaxed idea of a building block of all natural resources. There are currently four in the system, but only three occur naturally in the environment: <code class="language-plaintext highlighter-rouge">Alpha</code>, <code class="language-plaintext highlighter-rouge">Beta</code>, and <code class="language-plaintext highlighter-rouge">Gamma</code>. The fourth, <code class="language-plaintext highlighter-rouge">Biomass</code>, doesn’t participate in authoring.</p>

<p><code class="language-plaintext highlighter-rouge">Foods</code> - Foods make up the naturally occurring resources in the world. Each food is a unique combination of molecules, which creates opportunities for differing absorption spectra in agents, allowing for food pressures.</p>

<p>Here’s an early version of the food designer:</p>

<p><img src="/Worlds/assets/images/food-designer.png" alt="Screenshot of food designer" /></p>
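<p>As a toy illustration of the food/molecule idea (the real data lives in the Delphi designer above; these names and numbers are made up), a food can be modeled as a bag of molecule quantities, and an agent’s absorption spectrum as per-molecule efficiencies:</p>

```python
# Illustrative sketch only: foods as molecule combinations, and an
# agent "absorption spectrum" as per-molecule efficiency. The real
# project is Delphi; all names and values here are invented.

FOODS = {
    "red_berry": {"Alpha": 2, "Beta": 1, "Gamma": 0},
    "blue_moss": {"Alpha": 0, "Beta": 1, "Gamma": 3},
}

def energy_gained(food, spectrum):
    """Energy an agent extracts from a food, given its absorption spectrum."""
    return sum(qty * spectrum.get(mol, 0.0) for mol, qty in food.items())

# An agent good at absorbing Alpha, poor at Gamma:
grazer = {"Alpha": 1.0, "Beta": 0.5, "Gamma": 0.1}
print(energy_gained(FOODS["red_berry"], grazer))   # 2*1.0 + 1*0.5 = 2.5
```

<p>Two agents with different spectra get different energy from the same food, which is where the food pressures come from.</p>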

<p><code class="language-plaintext highlighter-rouge">Biomes</code> - A biome is a set of parameters that can be applied to any location within a <code class="language-plaintext highlighter-rouge">region</code> to control various environmental aspects.</p>

<p>Here’s an early implementation of the biome designer:</p>

<p><img src="/Worlds/assets/images/biome-designer.png" alt="Screenshot of biome designer" /></p>

<p>And the Foods page of a biome indicates which foods should be allowed to grow within the biome.</p>

<p><img src="/Worlds/assets/images/biome-designer-foods.png" alt="Screenshot 2 of biome designer" /></p>

<p><code class="language-plaintext highlighter-rouge">Regions</code> - A region is a 32x32 “pixel” grid in which each cell can be painted with a particular biome.</p>

<p>Here’s a fully ratchet screenshot of the current ideas … you can draw with the mouse and select a color/biome to draw with. An “empty” cell (black) is treated as a default cell, without resources but with “Normal” for the other parameters. Adjustable someday…</p>

<p><img src="/Worlds/assets/images/region-designer.png" alt="Screenshot 2 of region designer" /></p>

<p><code class="language-plaintext highlighter-rouge">Worlds</code> - No editors for this yet, but this will be a 3x3 grid in which you can assign from 1 to 9 regions to cells.</p>
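<p>Put together, the authored shapes are small. Hypothetically (none of this is the project’s actual Delphi code), a region is just a 32x32 grid of biome ids, and a world is a 3x3 grid whose cells may each hold a region:</p>

```python
# Hypothetical sketch of the authored shapes described above: a region
# is a 32x32 grid of biome ids (0 = the "empty"/default biome), and a
# world is a 3x3 grid where each cell may hold a region or stay empty.

REGION_SIZE = 32

def new_region():
    return [[0] * REGION_SIZE for _ in range(REGION_SIZE)]

def paint(region, x, y, biome_id):
    """What the mouse does in the region editor: set one cell's biome."""
    region[y][x] = biome_id

world = [[None] * 3 for _ in range(3)]   # from 1 to 9 cells get a region
world[1][1] = new_region()
paint(world[1][1], 4, 7, biome_id=2)
```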

<h2 id="upscaling-layer">Upscaling Layer</h2>
<p>The scale presented by the authoring tools is purposefully low-resolution, but that data will pass through the upscaling layer. This layer is the bridge between the friendly editing concepts like “good” and “better” and the higher resolution “floats everywhere” data for the sim.</p>

<p>Some current thoughts about this layer, which does not exist yet:</p>
<ul>
  <li>We’ll use a scale factor of 8x, which results in a region of 256x256 cells (one editor “pixel” maps to 8 cells in the sim)</li>
  <li>The upscaler will use noise/randomization/etc to introduce variation and unpredictability.</li>
  <li>We’ll look into biome-border smoothing for a more natural result</li>
</ul>
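<p>A toy sketch of the first two bullets (Python for illustration only, not the project’s Delphi code, with made-up names throughout; the border-smoothing bullet is left out to keep it short):</p>

```python
import random

SCALE = 8  # one authored "pixel" becomes an 8x8 block of sim cells

def upscale(authored, base_value, jitter=0.1, rng=None):
    """Expand a low-res grid of biome ids into high-res float cells.

    Toy stand-ins: base_value(biome) maps a biome id to a nominal
    resource level, and jitter adds per-cell randomization so
    same-biome cells don't all come out identical.
    """
    rng = rng or random.Random(0)
    n = len(authored)
    hi = [[0.0] * (n * SCALE) for _ in range(n * SCALE)]
    for ay in range(n):
        for ax in range(n):
            v = base_value(authored[ay][ax])
            for dy in range(SCALE):
                for dx in range(SCALE):
                    hi[ay * SCALE + dy][ax * SCALE + dx] = v * (1 + rng.uniform(-jitter, jitter))
    return hi

authored = [[0, 1], [1, 1]]                  # tiny 2x2 stand-in for 32x32
cells = upscale(authored, base_value=lambda b: float(b))
```

<p>A real 32x32 authored region would come out as 256x256 this way; the smoothing pass would then blend values across biome borders before the sim ever sees them.</p>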

<h2 id="simulation-layer">Simulation Layer</h2>

<p>Throughout human history, little has been known about this layer, and today we know even less.</p>

<p>We know <em>some</em> stuff …</p>

<p>I <em>think</em> I know that I need to go here next … and design this data … so that the upscaler has both an input and an output target.</p>

<p>We’ll see!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[The Layered Approach After lots of starts and stops on this project I’ve finally managed to come to a design that feels workable, and it consists of 3 major layers: Authoring Layer I wanted the authoring process to be friendly and “low resolution” so you don’t need to be a geologist, biologist, nor artist to create a world where interesting things can happen. The authoring layer presents a simple framework for building the environment. Designing World Objects Molecules - A relaxed idea of a building block of all natural resources. Currently there are 4 in the system, but only 3 occur naturally in the environment: Alpha, Beta, and Gamma and the 4th is Biomass but doesn’t participate in the authoring. Foods - Foods make up the naturally occurring resources in the world. Molecules are combined in unique combinations that present opportunities for differing absorption spectrums in agents, allowing for food pressures. Here’s an early version of the food designer: Biomes - a biome is a set of parameters that can be applied to any location within a region to control various environmental aspects. Here’s an early implementation of the biome designer: And the Foods page of a biome indicates which foods should be allowed to grow within the biome. Regions - A region is a 32x32 “pixel” grid in which each cell can be painted with a particular biome. Here’s a fully ratchet screenshot of the current ideas … you can draw with the mouse and select a color/biome to draw with. An “empty” cell (black) is treated as a default cell, without resources but with “Normal” for the other parameters. Adjustable someday… Worlds - No editors for this yet, but this will be a 3x3 grid in which you can assign from 1 to 9 regions to cells. Upscaling Layer The scale presented by the authoring tools is purposefully low-resolution, but that data will pass through the upscaling layer. 
This layer is the bridge between the friendly editing concepts like “good” and “better” and the higher resolution “floats everywhere” data for the sim. Some current thoughts about this layer, which does not exist yet: We’ll use a scale factor of 8x, which results in a region of 256x256 cells (one editor “pixel” maps to 8 cells in the sim) The upscaler will use noise/randomization/etc to introduce variation and unpredictability. We’ll look into biome-border smoothing for a more natural result Simulation Layer Throughout human history, little has been known about this layer, and today we know even less. We know some stuff … I think I know that I need to go here next … and design this data … so that the upscaler has both an input and an output target. We’ll see!]]></summary></entry><entry><title type="html">The Worlds Project</title><link href="https://orbitalmaneuvers.github.io/Worlds/2025/12/15/welcome.html" rel="alternate" type="text/html" title="The Worlds Project" /><published>2025-12-15T00:00:00+00:00</published><updated>2025-12-15T00:00:00+00:00</updated><id>https://orbitalmaneuvers.github.io/Worlds/2025/12/15/welcome</id><content type="html" xml:base="https://orbitalmaneuvers.github.io/Worlds/2025/12/15/welcome.html"><![CDATA[<h1 id="intro">Intro</h1>

<p>This project began life as a question …</p>

<p><em>Can “emergent behaviors” be witnessed in something a desktop PC can do, or is that an “at scale”  phenomenon?</em></p>

<p>I started with “I want to see critters do unexpected things” and began the process of scaling that concept down to a plan I thought I could pull off, seeing as how this is a spare time project.</p>

<p>This project is also a chance to get more experience with utilizing AI and finding out what works for me.</p>

<p>That said, there’s no AI-generated code in this project. As a best practice for my workflow, I purposefully try to avoid telling models anything specific about my development tools or language, so that I can get what helps me the most: high-level concepts explained logically from a general coding perspective, so I understand the problem space better. I can turn that understanding into the code myself, but especially in this project, I needed much better understanding of what I was even trying to do.</p>

<p>The first phase was mostly with ChatGPT, discussing what’s needed to be able to witness something like emergent behaviors. Is it even possible for one dude to write something that would have any kind of interesting result? The assurances I got from those first conversations boiled down to this:</p>

<ul>
  <li>it’s not about scale</li>
  <li>it’s not about population size</li>
</ul>

<p>After spending some time doing proof of concept coding, I circled back to focus on the design. At this point, I switched to Copilot in VS Code, with a manual selection of GPT-5.2. VS Code allows me to at least try to be more organized, since Copilot can read documents where I have my questions/prompts, and it can update design documents itself as we discuss things.</p>

<p>This workflow allowed me to plow through a lot of design discussions, generate a lot of notes, and have an informed perspective - <em>to a degree</em> - on the big players within the project.</p>

<h2 id="thus-we-begin">Thus, we begin.</h2>

<p>I consider this the real starting point for this project, and it’s not normal. After reaching a point in the design of both where stuff is happening, and how stuff happens, it became clear that without some form of visualization early on, I was going to be banging my head on the desk a lot. The <em>type</em> of coding I needed to do was a little unfamiliar and it needs to be done “right” <em>consistently</em> to produce the kind of results I want.</p>

<p>So, first … the UI and Geography.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Intro This project began life as a question … Can “emergent behaviors” be witnessed in something a desktop PC can do, or is that an “at scale” phenomenon? I started with “I want to see critters do unexpected things” and began the process of scaling that concept down to a plan I thought I could pull off, seeing as how this is a spare time project. This project is also a chance to get more experience with utilizing AI and finding out what works for me. That said, there’s no AI-generated code in this project. As a best practice for my workflow, I purposefully try to avoid telling models anything specific about my development tools or language, so that I can get what helps me the most: high-level concepts explained logically from a general coding perspective, so I understand the problem space better. I can turn that understanding into the code myself, but especially in this project, I needed much better understanding of what I was even trying to do. The first phase was mostly with ChatGPT, discussing what’s needed to be able to witness something like emergent behaviors. Is it even possible for one dude to write something that would have any kind of interesting result? The assurances I got from those first conversations boiled down to this: it’s not about scale it’s not about population size After spending some time doing proof of concept coding, I circled back to focus on the design. At this point, I switched to Copilot in VS Code, with a manual selection of GPT-5.2. VS Code allows me to at least try to be more organized, since copilot can read documents where I have my questions/prompts, and it can update design documents itself as we discuss things. This workflow allowed me to plow through a lot of design discussions, generate a lot of notes, and have an informed perspective - to a degree - of the big players within the project. Thus, we begin. 
I consider this the real starting point for this project, and it’s not normal. After reaching a point in the design of both where stuff is happening, and how stuff happens, it became clear that without some form of visualization early on, I was going to be banging my head on the desk a lot. The type of coding I needed to do was a little unfamiliar and it needs to be done “right” consistently to produce the kind of results I want. So, first … the UI and Geography.]]></summary></entry></feed>