How we built the Storybook Day 3D animation
Storybook just hit a major milestone: version 7.0! With a re-engineered codebase and a bunch of new features, it’s faster and more stable than ever before. To celebrate, the team hosted their first-ever user conference—Storybook Day. And to make things even more special, we decided to add a visually stunning 3D element to the event landing page.
We used React Three Fiber (R3F) to build an eye-catching 3D illustration, inspired by Storybook’s Tetris blocks branding. In this article, we’ll dive into the nitty-gritty of shipping a 3D scene. Here’s what I’ll cover:
- 🏗️ Avoid object overlap with sphere packing
- 🧱 Model Tetris blocks with extrusion
- 🎥 Enhance visuals with effects like depth of field and shadows
- 🏎️ Optimize performance by reducing material count
ℹ️ This post assumes foundational knowledge of React Three Fiber. If you're new to 3D or not familiar with the R3F API, check out my intro post for a primer.
Our strategy
We built the event site using Next.js and @react-three/fiber, with a little help from @react-three/drei. The animation features floating Tetris blocks surrounding extruded “7.0” text.
Here’s an MVP in code.
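If you’re reading along without the sandbox, here’s a rough sketch of that starting point: extruded “7.0” text in the center with primitive blocks scattered randomly around it. The font path, palette, and numbers below are placeholders for illustration, not the production assets.

import { Canvas } from '@react-three/fiber';
import { Center, Text3D } from '@react-three/drei';

// Placeholder values for illustration only
const COLORS = ['#FC521F', '#CA90FF', '#1EA7FD', '#FFAE00'];
const FONT_URL = '/fonts/typeface.json';

const randomPosition = () => [
  (Math.random() - 0.5) * 20,
  (Math.random() - 0.5) * 6,
  (Math.random() - 0.5) * 6,
];

export const MVPScene = () => (
  <Canvas camera={{ position: [0, 0, 10], fov: 50 }}>
    <ambientLight intensity={0.5} />
    <directionalLight position={[5, 5, 5]} />
    {/* Extruded “7.0” text at the center of the scene */}
    <Center>
      <Text3D font={FONT_URL} size={3} height={0.75}>
        7.0
        <meshPhongMaterial color="#FC521F" />
      </Text3D>
    </Center>
    {/* Primitive blocks scattered at random; note the occasional overlap */}
    {Array.from({ length: 40 }, (_, i) => (
      <mesh key={i} position={randomPosition()}>
        <boxGeometry args={[0.8, 0.8, 0.8]} />
        <meshPhongMaterial color={COLORS[i % COLORS.length]} />
      </mesh>
    ))}
  </Canvas>
);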
The details
On to the fun stuff. Notice how the blocks in the MVP are randomly spread across the scene and sometimes overlap with the text or each other. The scene would look better if the blocks didn’t overlap at all. That’s one example of the small adjustments we made to take this scene from MVP to production-ready. Let’s dive into these techniques.
Sphere packing to place blocks
The pack-spheres library enabled us to distribute the blocks evenly and prevent them from overlapping. It uses a brute-force approach to arrange spheres of varying radii within a cube.
import pack from 'pack-spheres';

const spheres = pack({
  maxCount: 40,
  minRadius: 0.125,
  maxRadius: 0.25,
});
We then scaled the spheres to fit our scene space and stretched them horizontally along the x-axis. Lastly, we placed a block at the center of each sphere, scaled to the sphere’s radius.
That gave us a nice spread of blocks, with a satisfying mix of size and placement.
Fixing the overlap between the “7.0” text and the blocks required a different approach. Initially, we thought of using pack-spheres to detect collisions between the spheres and the text geometry. However, we ended up opting for a simpler solution: slightly shifting the spheres along the z-axis.
The text essentially parts the sea of blocks.
Here’s the entire process, in code:
import * as THREE from 'three';
import pack from 'pack-spheres';
// `Random`, `colors`, and `blockTypes` come from the project's utilities:
// Random offers pick() and quaternion() helpers (e.g. canvas-sketch-util/random),
// `colors` is the palette from the material store shown later, and `blockTypes`
// lists the available block shapes.

// Sphere packing
const spheres = pack({
  maxCount: 40,
  minRadius: 0.125,
  maxRadius: 0.25,
}).map((sphere) => {
  const inFront = sphere.position[2] >= 0;
  return {
    ...sphere,
    position: [
      sphere.position[0],
      sphere.position[1],
      // offset to avoid overlapping with the 7.0 text
      inFront ? sphere.position[2] + 0.6 : sphere.position[2] - 0.6,
    ],
  };
});

const size = 5.5;
// stretch horizontally
const scale = [size * 6, size, size];

const blocks = spheres.map((sphere, index) => ({
  ...sphere,
  id: index,
  // scale position to scene space
  position: sphere.position.map((v, idx) => v * scale[idx]),
  // scale radius to scene space
  size: sphere.radius * size * 1.5,
  color: Random.pick(colors),
  type: Random.pick(blockTypes),
  rotation: new THREE.Quaternion(...Random.quaternion()),
}));
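From there, rendering is a matter of mapping the blocks array onto block components inside the Canvas. Roughly sketched (the Block component itself is shown later; here each block is wrapped in a group to apply its transform):

{blocks.map((block) => (
  <group
    key={block.id}
    position={block.position}
    quaternion={block.rotation}
    scale={block.size}
  >
    <Block type={block.type} color={block.color} />
  </group>
))}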
Extrusion to model Tetris blocks
You may have noticed that we’ve only used primitive blocks so far. Hey, no hating on primitives, but Storybook’s branding uses Tetris-style blocks, so we had to add those into the mix.
The concept of ExtrudeGeometry in Three.js is quite interesting: you supply a 2D shape, defined with a syntax similar to SVG paths or CSS shapes, and it extrudes that shape along the z-axis. That makes it perfect for creating Tetris blocks.
Drei’s Extrude utility offers a relatively straightforward syntax for creating such shapes. Here’s an example of how we generated the “T” block:
import { useMemo } from 'react';
import * as THREE from 'three';
import { Extrude } from '@react-three/drei';

// Length of one cell of the Tetris block (example value for illustration)
const SIDE = 0.75;

export const EXTRUDE_SETTINGS = {
  steps: 2,
  depth: 0.5625,
  bevelEnabled: false,
};

export const TBlock = ({ type, color, ...props }) => {
  const shape = useMemo(() => {
    // Trace the outline of a "T" tetromino as a 2D shape
    const _shape = new THREE.Shape();
    _shape.moveTo(0, 0);
    _shape.lineTo(SIDE, 0);
    _shape.lineTo(SIDE, SIDE * 3);
    _shape.lineTo(0, SIDE * 3);
    _shape.lineTo(0, SIDE * 2);
    _shape.lineTo(-SIDE, SIDE * 2);
    _shape.lineTo(-SIDE, SIDE);
    _shape.lineTo(0, SIDE);
    return _shape;
  }, []);

  return (
    <Extrude args={[shape, EXTRUDE_SETTINGS]} {...props}>
      <meshPhongMaterial color={color} />
    </Extrude>
  );
};
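Because TBlock spreads its remaining props onto the extruded mesh, it can be positioned, rotated, and scaled like any other object in the scene. For example (values here are arbitrary):

<TBlock color="#1EA7FD" position={[2, 1, 0]} rotation={[0.4, 0.8, 0]} scale={0.5} />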
Next, we layered on shadows and a depth-of-field effect to add some oomph.
Shadows
Shadows bring scenes to life by adding depth and realism. You can configure lights and meshes within the scene to cast shadows using the castShadow prop. However, we were looking for a softer look, so once again we reached for Drei, which offers a convenient contact shadows component.
Contact shadows are a “fake shadow” effect. They’re generated by filming the scene from below and rendering the shadow onto a catcher plane. The shadow is accumulated over several frames, making it softer and more realistic.
To use it, add the ContactShadows component to the scene. Then customize the look by adjusting the resolution, opacity, blur, color, and other properties.
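For reference, a lightly sketched version of that setup might look like this (the prop values are illustrative rather than the production settings):

import { ContactShadows } from '@react-three/drei';

// Rendered inside the <Canvas>, beneath the blocks and the text
export const FloorShadows = () => (
  <ContactShadows
    position={[0, -5, 0]}  // the shadow-catcher plane sits under the scene
    opacity={0.75}
    scale={40}             // size of the catcher plane
    blur={2.5}             // higher blur = softer shadows
    far={10}               // only objects within this distance cast onto the plane
    resolution={512}
  />
);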
Depth of field effect
At this stage, every object in the scene is rendered with the same sharpness, making the scene appear somewhat flat. Photographers often use wide apertures and shallow depth of field to create a pleasing blurred aesthetic. We mimicked this effect by applying post-processing effects—using @react-three/postprocessing—to our scene, giving it a more cinematic feel.
The EffectComposer manages and runs post-processing passes. It begins by rendering the scene to a buffer and then applies filters and effects one at a time before rendering the final image onto the screen.
Picking a focus distance
With the DepthOfField effect, you can pinpoint a specific distance (focusDistance) within your scene and make everything else beautifully blurry. But how do you define the focus distance? Is it measured in world units or something else?
import { Canvas } from '@react-three/fiber';
import { EffectComposer, DepthOfField } from '@react-three/postprocessing';

export const Scene = () => (
  <Canvas>
    {/* Rest of our scene */}
    <EffectComposer multisampling={8}>
      <DepthOfField focusDistance={0.5} bokehScale={7} focalLength={0.2} />
    </EffectComposer>
  </Canvas>
);
The camera’s view is defined by a pyramid-shaped volume called the “view frustum.” Objects within the minimum (near plane) and maximum (far plane) distances from the camera will be rendered.
The focusDistance parameter determines the distance from the camera at which objects are considered in focus. Its value is normalized between 0 and 1, where 0 represents the near plane of the camera and 1 represents the far plane.

We set the focus distance to 0.5, which is where the text is located. Objects closer to that threshold will be in focus, while those farther away will be blurred. Try adjusting the focusDistance slider in the sandbox below to see how it affects the scene.
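If you know roughly where your subject sits in world units, you can derive a starting value for focusDistance from the camera’s near and far planes. This sketch assumes the 0-to-1 mapping described above is linear, which is a simplification; treat the result as a starting point and fine-tune by eye.

// Rough conversion from a world-space distance to a normalized focusDistance,
// assuming 0 maps to the near plane and 1 to the far plane (linear mapping assumed)
const toFocusDistance = (worldDistance, near, far) =>
  (worldDistance - near) / (far - near);

// e.g. a camera with near = 0.1 and far = 100, subject ~50 units away:
toFocusDistance(50, 0.1, 100); // ≈ 0.5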
You can also further customize this effect by adjusting the bokehScale and focalLength props.
Using a material store for performance optimization
Although shadows and depth of field are cool visual effects, they can be quite expensive to render and have a significant impact on performance. Seeking some advice on this topic, I reached out to Paul Henschel aka @0xca0a. Among several perf optimizations, they suggested using a material store to avoid creating a new material instance for each block. Let’s take a closer look at how this works.
The Block component uses the color prop to create a unique material for every instance. For example, every orange-colored block will have its own material instance. Pretty wasteful, right?
export const Block = ({ type, color }: BlockProps) => {
  const Component = BLOCK_TYPES[type].shape;
  return (
    <Component args={BLOCK_TYPES[type].args as any} castShadow>
      <meshPhongMaterial color={color} />
    </Component>
  );
};
By using a material store, we could reuse the same material across multiple block instances. This improved performance by reducing the number of materials that need to be created and rendered.
import * as THREE from 'three';

THREE.ColorManagement.legacyMode = false;

const colors: string[] = [
  '#FC521F',
  '#CA90FF',
  '#1EA7FD',
  '#FFAE00',
  '#37D5D3',
  '#FC521F',
  '#66BF3C',
  '#0AB94F',
];

interface Materials {
  [color: string]: THREE.MeshPhongMaterial;
}

const materials: Materials = colors.reduce(
  (acc, color) => ({ ...acc, [color]: new THREE.MeshPhongMaterial({ color }) }),
  {}
);

export { colors, materials };
The store generates a material for every possible block color and stores it in a plain object. Now, instead of creating a material for each instance, the block component simply references it from the material store. We went from 40 materials to 8.
export const Block = ({ type, color }: BlockProps) => {
  const Component = BLOCK_TYPES[type].shape;
  return (
    <Component
      args={BLOCK_TYPES[type].args as any}
      material={materials[color]}
    />
  );
};
Wrap up
3D is now part of the web’s grain, and R3F is a fantastic tool for intertwining HTML and WebGL. The R3F ecosystem is rich, and libraries like drei and postprocessing simplify complex 3D tasks. The Storybook Day 3D scene perfectly showcases the platform’s possibilities. We used sphere packing, extrusion, shadows, depth of field, and a material store to create a memorable event landing page.
In deconstructing this scene, I aimed to shed light on the final 20% of work that elevates a scene from “done” to “ready to ship”. If you have any questions or feedback, feel free to reach out via Twitter.
Want to dive in deeper? The code is open-source and each aspect of the scene is broken down in the deployed Storybook.