This post and the demos below are based on an experimental feature that's currently behind a flag; enable it at chrome://flags/#canvas-draw-element in any Chromium-based browser. If the flag is unavailable or the demos still don't work, try Chrome Canary.
Recently, social media has been buzzing about a new proposal from the WICG that aims to render traditional HTML inside a canvas, and I have to say, I'm quite excited about it. I've been waiting for something like this ever since I first came across a tweet about the idea back in 2024, so naturally I had to dive in and see what it's all about.

The Problem
For years, the web has drawn a hard line between two worlds: the structured, accessible richness of HTML and the raw, pixel-level control of canvas.
If you wanted accessible UI components and the flexibility of CSS, you stayed in the DOM. But if you needed full control over rendering (whether for games, 3D scenes, or custom shaders), you had to switch to canvas.
The Proposal
The HTML-in-Canvas proposal aims to address this limitation by enabling <canvas> to render real HTML content directly, while preserving key DOM benefits like layout, accessibility, and CSS styling.
The API introduces three main primitives:
- A layoutsubtree attribute that opts canvas children into layout
- A drawElementImage() method that renders a child element into the canvas
- A paint event that fires whenever a canvas child changes
Putting it all together, the API looks like this:
<canvas layoutsubtree id="source">
  <div id="content">
    {...content}
  </div>
</canvas>
const canvas = document.getElementById("source");
const content = document.getElementById("content");
const ctx = canvas.getContext("2d");

canvas.onpaint = () => {
  ctx.reset();
  ctx.drawElementImage(content, 0, 0);
};

canvas.requestPaint();
At the moment, this feature is behind a flag. You can enable it at chrome://flags/#canvas-draw-element in any Chromium-based browser. If the flag doesn't appear or the demos still don't work after enabling it, try using Chrome Canary.
For security reasons, the proposal imposes some limitations on what can be rendered inside the canvas. That said, these constraints are far less restrictive than those of the alternatives mentioned in the Workarounds section. I recommend reviewing the full specification, particularly the privacy-preserving painting section.
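Because the feature can disappear or change, it's worth guarding any code that uses it. Here's a minimal sketch of such a guard; the helper name is my own, and it simply checks whether the context exposes the experimental drawElementImage() method:

```javascript
// Hypothetical helper: returns true when the 2D context exposes the
// experimental drawElementImage() method (i.e. the flag is enabled).
function supportsDrawElement(ctx) {
  return typeof ctx?.drawElementImage === "function";
}

// In the browser, usage might look like:
// const ctx = canvas.getContext("2d");
// if (!supportsDrawElement(ctx)) renderDomFallback();
```

With a check like this in place, the DOM children of the canvas can simply stay visible as a fallback when the flag is off.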
Seeing It in Action
When I started experimenting with the proposal, I began thinking about what this could mean for the future of the web, not just in terms of interesting effects and interactions, but also the new kinds of use cases it could unlock. I ended up organizing these ideas into four broad categories.
1. The Basics: Post-processing
With just the previous snippets, your content, accessible and styled with CSS, is rendered directly into a canvas. From there, you can use that canvas as a texture wherever you need it, for example as input to a shader.
In this first demo, I reuse the canvas content as a texture inside a set of shaders built with React Three Fiber and React Postprocessing.
Imagine creating a beautiful hero section for your landing page and being able to easily layer post-processing effects on top of it to make it even more impressive, without having to worry about SEO or whether search engine crawlers can still read the content. The DOM is still there, the content is still accessible to crawlers; it's just being rendered somewhere else.
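To keep a GPU texture in sync with live DOM content, one approach is to flag the texture for re-upload whenever the canvas fires the proposal's paint event. This is a sketch under that event model; the function name is my own, and `texture` stands for any object with a `needsUpdate` flag, such as a three.js CanvasTexture built from this canvas:

```javascript
// Sketch: mark a texture dirty whenever the canvas repaints its HTML
// children, so the GPU copy is refreshed on the next rendered frame.
// `texture` is any object with a `needsUpdate` flag
// (e.g. a three.js CanvasTexture wrapping this canvas).
function bindPaintToTexture(canvas, texture) {
  const onPaint = () => {
    texture.needsUpdate = true;
  };
  canvas.addEventListener("paint", onPaint);
  // Return a disposer so the listener can be cleaned up later.
  return () => canvas.removeEventListener("paint", onPaint);
}
```

The render loop then picks up the refreshed texture automatically, instead of re-uploading it on every frame whether or not anything changed.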
Notes & references
The fluid effect comes from https://github.com/whatisjery/react-fluid-distortion
The rain effect is based on a Shadertoy snippet.
The pixelated effect uses the built-in pixelation effect from React Postprocessing.
2. A Small Feature
Not everything needs to be a full-screen effect. And to be fair, with some of these effects we're also undoing part of the accessibility we just gained (the pixelated effect can be a bit much 😅).
One use case I find particularly interesting for HTML-in-Canvas is adding small, subtle interactions to the UI, things that were previously hard (or nearly impossible) to achieve, while still maintaining a clean, high-performance interface. The goal is to reserve these wow effects for specific interactions that capture the user's attention.
As an example, I'll mimic this vanish input snippet created by Rauno, a piece of text that fades away when you press Enter.
The trick behind this snippet is that it uses a hidden canvas positioned absolutely on top of the input field. When the user presses Enter, the canvas is revealed and the same text is drawn onto it using matching font styles, making it appear as if the input is still there. From that point on, it's just a matter of manipulating the canvas pixels on each frame.
With HTML-in-Canvas, we can achieve the same result without relying on the "hidden canvas" trick, since the input itself can be rendered directly into the canvas.
This example by Matt Rothenberg is another great demonstration of this kind of use case. The effect that appears when you click "Submit" creates a subtle but memorable wow moment for the user.
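The per-frame pixel manipulation in effects like these usually boils down to reading the canvas's ImageData and turning visible pixels into particles that drift and fade. Here is a rough sketch of that sampling step; the names, step size, and threshold are my own, not Rauno's implementation:

```javascript
// Sketch: sample an RGBA pixel buffer (e.g. from ctx.getImageData after
// drawElementImage) and collect visible pixels as particles to animate.
function pixelsToParticles(data, width, height, step = 2, alphaThreshold = 16) {
  const particles = [];
  for (let y = 0; y < height; y += step) {
    for (let x = 0; x < width; x += step) {
      const alpha = data[(y * width + x) * 4 + 3];
      if (alpha > alphaThreshold) {
        // Each particle starts at its pixel position with a random drift.
        particles.push({ x, y, vx: Math.random() - 0.5, vy: -Math.random(), life: 1 });
      }
    }
  }
  return particles;
}
```

On each subsequent frame, you would clear the canvas, move each particle along its velocity, decay its life, and redraw the survivors.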
3. Transitions
Another nice use case for HTML-in-Canvas is applying transition effects between sections of content or entire pages.
In this demo, I experimented with a curl effect, using a Shadertoy snippet as a starting point. Yes, the classic iBook page-turn transition is now surprisingly easy to recreate on the web.
Building on the same idea, here's another experiment where the site's content is revealed in a special way as the user logs in.
4. Building 2D UIs in a 3D world
Building 2D user interfaces for 3D web scenes is usually quite a tedious task, at least for me. Generally speaking, we don't have the full power of CSS when it comes to layout (Flexbox, Grid), or even basic design features like box shadows or borders. Everything has to be handled at the shader level.
There are a few different approaches we can take here.
Let's say we have a scene with a 3D model of a computer, and we want to display something on its screen. A simple texture, or even a video, might not be enough, so we decide to build a fully interactive interface.
If our stack is React + React Three Fiber, one common approach is to use the Html component from Drei, which lets us attach HTML content to objects in the scene.
However, this doesn't always give us the result we're looking for. In many cases, we want the interface to feel truly embedded in the 3D world, not just layered on top of it, and we might want to apply shader effects to it. We ran into this exact issue while building the arcade scene for the basement.studio website.
At the time, we solved it using uikit, a great library that provides accessible components and layout primitives for React Three Fiber. We rendered the UI into a render target, applied a fragment shader to it, and used that as the screen texture, resulting in this lab.
But now there's a third option. While uikit is powerful, it's still limited compared to CSS. With HTML-in-Canvas, we can follow a similar approach while leveraging the full power of HTML and CSS.
In fact, the creator of Three.js has already been experimenting with this proposal. In release r184, he introduced HTMLTexture, a new texture class that renders live HTML via this new browser API.
The implementation also includes a new add-on called InteractionManager, which for HTMLTexture computes a CSS matrix3d transform on each frame, allowing the browser to handle hit-testing, hover, focus, and input natively, without the need for raycasting or synthetic events.
Thanks to these two new features in Three.js, the following demo was very easy to create.
The source code for this demo looks something like this:
import "./styles.css";

import { Footer } from "@/components/layout/footer";
import { Header } from "@/components/layout/header";
import { GridBackground } from "@/components/ui/grid-background";
import { ComputerScreen } from "./components/computer-screen";
import { Scene } from "./components/scene";

const BasicUI = () => (
  <>
    <Header />
    <GridBackground className="bg-codrops" />
    <ComputerScreen />
    <Scene />
    <Footer />
  </>
);

export default BasicUI;
There are two key components to pay attention to: <ComputerScreen /> and <Scene />.
<ComputerScreen /> is simply the content rendered inside the computer, written entirely in HTML. We just add an ID to its container so we can reference it later from <Scene />.
export const ComputerScreen = () => {
  return (
    <div id="computer_screen">
      {...content}
    </div>
  );
};
<Scene /> is where the magic happens. Below is the full component, with comments on the key parts.
"use client";

import { ContactShadows, Float, OrbitControls, Stage } from "@react-three/drei";
import { Canvas, useFrame, useLoader, useThree } from "@react-three/fiber";
import { Suspense, useEffect, useRef } from "react";
import { HTMLTexture, Mesh, type ShaderMaterial } from "three";
import { InteractionManager } from "three/addons/interaction/InteractionManager.js";
import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";
import { screenMaterial } from "./crt-effect";

type ScreenMaterial = ShaderMaterial & { map: HTMLTexture | null };

const material = screenMaterial as ScreenMaterial;

const Mac = () => {
  const gltf = useLoader(GLTFLoader, "/mac.glb");
  const { gl, camera } = useThree();
  const screenRef = useRef<Mesh>(null);
  const interactions = useRef<InteractionManager | null>(null);

  useEffect(() => {
    // We retrieve the element
    const element = document.getElementById("computer_screen");
    if (!element) throw new Error("#computer_screen element not found");

    // We create a texture from that element using HTML-in-Canvas
    const texture = new HTMLTexture(element);

    // Create an InteractionManager to forward pointer events from the 3D plane to the DOM element
    interactions.current = new InteractionManager();

    // We attach the texture to the computer screen plane
    material.uniforms.map.value = texture;
    material.map = texture;

    // Connect the interaction manager to the renderer and camera
    interactions.current.connect(gl, camera);

    // Register the screen plane mesh to receive pointer events
    if (screenRef.current) interactions.current.add(screenRef.current);

    window.dispatchEvent(new Event("mac-canvas-ready"));
  }, [gl, camera]);

  useFrame(({ clock }) => {
    material.uniforms.uTime.value = clock.elapsedTime;
    interactions.current?.update();
  });

  return (
    <Float speed={2} rotationIntensity={0.1} floatIntensity={0.05}>
      <primitive object={gltf.scene} />
      <mesh
        ref={screenRef}
        position={[0, 0.102, 0.183]}
        rotation={[(-Math.PI / 180) * 6.5, 0, 0]}
        material={material}
      >
        <planeGeometry args={[562 * 0.00062, 408 * 0.00062]} />
      </mesh>
    </Float>
  );
};
export const Scene = () => (
  <main className="fixed inset-0 h-svh">
    <Canvas
      shadows
      dpr={[1, 2]}
      gl={{ antialias: true }}
      camera={{ position: [0.02, 0.01, 0.05], fov: 24, near: 0.1, far: 100 }}
    >
      <Suspense fallback={null}>
        <Stage
          intensity={0.5}
          environment="forest"
          shadows={false}
          adjustCamera={false}
        >
          <Mac />
        </Stage>
      </Suspense>
      <ContactShadows
        position={[0, -0.35, 0]}
        opacity={0.5}
        blur={2}
        far={4}
        resolution={128}
      />
      <OrbitControls
        enableDamping
        enablePan={false}
        minDistance={2}
        maxDistance={8}
        minPolarAngle={Math.PI / 6}
        maxPolarAngle={Math.PI / 2}
      />
    </Canvas>
  </main>
);
This approach will make it much easier to build interfaces for web games, interactive experiences, and even VR/AR applications using WebXR.
Workarounds
If we don't want to wait for the proposal to be fully implemented and widely supported across browsers, there are currently a few alternatives for achieving this kind of behavior.
On one hand, libraries like html2canvas attempt to emulate CSS properties directly in a canvas. It's an interesting workaround, but far from perfect. As the documentation itself states: since each CSS property needs to be manually coded to render correctly, html2canvas will never have full CSS support. The library tries to support the most commonly used CSS properties to the extent that it can.
That said, it has proven to be good enough in practice, as it was used in the Next.js Conf 2024 badge to achieve this kind of effect.
On the other hand, another approach is to use the SVG <foreignObject> element. While html2canvas includes an option to render HTML this way, its implementation is fairly minimal, so you can achieve a similar result without relying on the library.
The SVG <foreignObject> element lets you embed HTML content inside an SVG. From there, you can use native browser APIs to serialize the SVG into a base64-encoded image and draw it onto a canvas, as shown in this example. Once again, not all accessibility features or properties are preserved with this approach.
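The serialization step needs nothing beyond string APIs: wrap the markup in an <svg><foreignObject> shell, base64-encode it into a data URL, and load that into an Image you then drawImage onto the canvas. A minimal sketch (the helper name is mine, and note that styles must be inlined, since external stylesheets don't apply inside the serialized SVG):

```javascript
// Sketch: wrap HTML markup in an SVG <foreignObject> and return a
// base64 data URL that can be loaded into an Image and drawn on a canvas.
function htmlToSvgDataUrl(html, width, height) {
  const svg =
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<foreignObject width="100%" height="100%">` +
    `<div xmlns="http://www.w3.org/1999/xhtml">${html}</div>` +
    `</foreignObject></svg>`;
  // btoa only handles Latin-1, so escape the UTF-8 payload first.
  const base64 = typeof btoa === "function"
    ? btoa(unescape(encodeURIComponent(svg)))
    : Buffer.from(svg, "utf8").toString("base64");
  return `data:image/svg+xml;base64,${base64}`;
}

// Browser usage might look like:
// const img = new Image();
// img.onload = () => ctx.drawImage(img, 0, 0);
// img.src = htmlToSvgDataUrl("<p style='color:red'>Hi</p>", 300, 150);
```

The resulting pixels are a static snapshot, so interactivity and accessibility are lost, which is exactly the gap the HTML-in-Canvas proposal is meant to close.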
Final Thoughts
HTML-in-Canvas feels like one of those ideas that makes you wonder why it didn't exist before. It's still early and experimental, but the potential is clear. If this direction holds, we might finally stop thinking in terms of "DOM vs. canvas" and start treating them as parts of the same rendering pipeline.
That's a major shift.


