Render view
The View class offers an abstraction that takes care of several important aspects for you:
- Camera controllers
- Render loop
- Dynamic resolution and detail based on camera movement and idling
- Resizing
- Restoration of lost GPU contexts
- Environments
- Wrapping, aggregating and simplifying many of the underlying modules into a single, high-level interface
A view represents the primary interface that most users will interact with. It is an expensive component, however, as each view has its own GPU context. This means that the entire GPU state will be duplicated for each view. Using multiple views is possible but generally a bad idea.
Prerequisites
Before you can create a view, you need to provide it with some resources.
Canvas
You need to put an HTML <canvas> element into your DOM and retrieve a reference to it, e.g.:
<canvas id="view_canvas" style="width:100%; height:100%"></canvas>
const canvas = document.getElementById("view_canvas") as HTMLCanvasElement;
The view uses ResizeObserver to automatically handle resizes.
The canvas' width and height properties will be set/modified by such resizes.
Unless you explicitly set the CSS size, i.e. the style width and height, changing these properties will in turn resize the canvas layout, causing a resize feedback loop.
Make sure to set the canvas CSS size to avoid runaway resize feedback loops!
Device profile
The view needs to know details about the device you're running on.
We define this in the DeviceProfile interface.
This information allows the view to adapt the level of detail and render resolution to your device's capabilities.
Alas, browsers in general, and Apple in particular, are secretive about the underlying hardware capabilities, primarily to avoid fingerprinting.
This means there is no easy way to automatically detect a device's limitations and performance characteristics.
While you can manually specify this, we also offer a helper function getDeviceProfile that will give you reasonable values based on a simplistic GPU tier scheme.
const gpuTier: GPUTier = 2; // mid-tier GPU, e.g. a laptop or newer iOS/iPad device.
const deviceProfile = getDeviceProfile(gpuTier);
Overestimating your GPU capabilities will make rendering slow and can cause the browser to terminate the GPU context, resulting in flickering and page reloads. When in doubt, go low!
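If you want to pick a tier at runtime, a crude heuristic can serve as a starting point. The sketch below is an assumption-based example, not an official recommendation; guessGpuTier is a hypothetical helper and the user-agent test is deliberately conservative.
// Hypothetical helper: guess a conservative GPU tier from the user agent.
// This is a simplistic assumption; tune it to your own audience and devices.
function guessGpuTier(): GPUTier {
    const ua = navigator.userAgent;
    const isMobileOrApple = /iPhone|iPad|Android|Macintosh/.test(ua);
    return isMobileOrApple ? 1 : 2; // when in doubt, go low
}
const deviceProfile = getDeviceProfile(guessGpuTier());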
Imports
Sadly, bundlers are still a thing.
Exactly how to wrangle your bundler of choice can vary quite a bit.
You/it may choose to inline some of these resources as base64 strings, or append a hash to their URL to support updates/caching.
The view expects that you provide it with all imports through a ViewImports object.
To help construct this object, we've added a helper function View.downloadImports.
This uses a ViewImportmap object to let you tell it exactly where to find any unloaded resources, or the loaded resources themselves, in case you inlined them somehow.
Most of the guides on bundling will simply copy or map node_modules/@novorender/api/public/ into a browser accessible path: /public/novorender/api/.
If you followed this recipe, you can simply set the Core3DImportMap.baseUrl property of the import map as below.
const baseUrl = new URL("/public/novorender/api/", window.location.origin);
const imports = await View.downloadImports({baseUrl});
You could of course give the full URL with domain and port number, but this could make it hard to share the same code for localhost debugging and production, hence the location.origin base URL.
You can also use import.meta.url as the base URL, but this will only work inside ESM modules.
If your code is not running as ESM, or import.meta.url doesn't work, you can keep using window.location.origin.
For any other variants, you'll have to manually specify the URL and/or reference to each resource, making sure you get the exact URL the bundler generated, e.g. including hash etc.
View life cycle
With all those plumbing chores out of the way, we're finally ready to create our view:
const view = new View(canvas, deviceProfile, imports);
This will create a GPU context and initialize things, but nothing really happens until you run the view.
await view.run();
Note that View.run is an async function.
It will run a render loop indefinitely, relinquishing control back to the browser after each frame is completed.
This means the browser won't kill it for timing out, like a synchronous loop would.
Behind the scenes it uses requestAnimationFrame.
Every frame it will check if there have been any changes to the render state, and if so, render that new state.
Sometimes you may want to exit this loop and dispose of a view explicitly to free up its resources. To do so, pass in an AbortSignal and signal it when you want to stop.
const abortController = new AbortController();
setTimeout(() => { abortController.abort(); }, 10_000); // exit after 10 seconds.
await view.run(abortController.signal);
view.dispose();
Alternatively, you can use the new using keyword from TypeScript 5.2 / the TC39 explicit resource management proposal, if your browser supports it.
using view = new View(canvas, deviceProfile, imports);
await view.run(abortController.signal);
Once 'run' returns, you may resume running the view, e.g. after a pause. Once disposed, the view is no longer usable.
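To make the pause/resume idea concrete, here is a minimal sketch, assuming you keep both the AbortController and the promise returned by run around (the pause and resume names are illustrative):
let abortController = new AbortController();
let runPromise = view.run(abortController.signal);

async function pause() {
    abortController.abort();
    await runPromise; // wait for the render loop to exit
}

function resume() {
    abortController = new AbortController();
    runPromise = view.run(abortController.signal);
}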
Let's see it in action:
Animation and compositing
If you want to animate anything in the view, you should assign/override the View.animate function rather than relying on requestAnimationFrame.
view.animate = (time: number) => {
    const t = Math.sin(time / 1000) * .5 + .5;
    view.modifyRenderState({ background: { color: [t, t, t, 1] } });
};
Animations will drain batteries quickly. Use with moderation!
If you want to composite your own 2D content into the canvas after the frame has actually been rendered, you should assign/override the View.render function.
Since this function is called after all render state changes have been resolved and committed for a frame, you may inspect the render state for your own purposes.
view.render = () => {
    const { width, height } = view.renderState.output;
    console.log(`Pixel size: ${width}, ${height}`);
};
For more complex scenarios you may want to extend the View class with one of your own and override these functions there instead.
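A minimal sketch of that approach, under the assumption that animate can be declared as a method on a subclass (MyView is an illustrative name, not part of the API):
class MyView extends View {
    // Same animation as before, expressed as a subclass method.
    animate(time: number) {
        const t = Math.sin(time / 1000) * .5 + .5;
        this.modifyRenderState({ background: { color: [t, t, t, 1] } });
    }
}
const myView = new MyView(canvas, deviceProfile, imports); // constructed just like the base View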
The canvas you pass to the view cannot simultaneously be used for e.g. CanvasRenderingContext2D. For compositing you must create a new canvas and overlay it using alpha blending.
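As an illustration of that overlay pattern, here's a hedged sketch assuming a second canvas with id "overlay_canvas" stacked on top of the view canvas via CSS (the absolute positioning, pointer-events: none styling and the text being drawn are all assumptions):
const overlay = document.getElementById("overlay_canvas") as HTMLCanvasElement;
const ctx = overlay.getContext("2d")!;

view.render = () => {
    // Keep the overlay resolution in sync with the rendered output.
    const { width, height } = view.renderState.output;
    overlay.width = width;
    overlay.height = height;
    ctx.clearRect(0, 0, width, height);
    ctx.fillStyle = "rgba(255, 255, 255, 0.75)";
    ctx.font = "16px sans-serif";
    ctx.fillText(`Pixel size: ${width} x ${height}`, 10, 24);
};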
Environments
We use image based lighting (IBL) to light and shade the geometry. Most of our examples use the default environment, which is a boring 6-pixel cube map with subtle shades of gray. Such a simple environment doesn't do the engine any justice; it's just there to provide at least some light out of the box. Otherwise the output would be completely black by default.
To use a proper environment map, we need to download a set of HDRI cube texture maps with pre-convoluted radiance and irradiance information. The process of how to generate those from an original HDR panorama image is outside the scope of this guide. Arturo's open source IBL baker may help you bake your own. Here, we'll show you how to pick and download one of the default environments.
The View class provides a function View.availableEnvironments that will retrieve a list of available environments from the specified source.
We've already included some environments on our api.novorender.com server, so that's where we'll point it to.
const envs = await View.availableEnvironments("https://api.novorender.com/assets/env/index.json");
const {url} = envs[0];
view.modifyRenderState({ background: { url } });
You can pick from one of the currently 16 environments by changing the index into the envs array.
Loading them can take some time the first time, so be patient.
Setting RenderStateBackground.url to undefined will revert things back to the default environment.
If you don't want the background image to stand out, you can blur it by setting RenderStateBackground.blur to, e.g., 0.5.
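For instance, a blurred version of the environment picked above (the 0.5 value is just an example):
view.modifyRenderState({ background: { url, blur: 0.5 } });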
Using with React.js and other UI frameworks
As long as you update the render state correctly, there's no need to explicitly render a view, e.g. as part of a React render.
Calling run() will make the render loop run in the background, updating automatically when needed.
You may trigger a re-render indirectly by changing the render state or resizing the CSS layout of the canvas.
In many cases the View will live as long as your app does, so there's no need to await or dispose the view.
In that case, it's just fire and forget!
If you do need to manage the lifecycle of the view, just store the promise somewhere and await it after you've signalled an abort.
Beware of frameworks that unmount the concrete canvas DOM element, as the view is tightly coupled with it.
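As a concrete example, here's a hedged sketch of wiring the view into a React component, assuming deviceProfile and imports from the earlier steps are available in scope (the component and ref names are illustrative, not part of the API):
import { useEffect, useRef } from "react";
import { View } from "@novorender/api";

function RenderView() {
    const canvasRef = useRef<HTMLCanvasElement>(null);

    useEffect(() => {
        const abortController = new AbortController();
        const view = new View(canvasRef.current!, deviceProfile, imports);
        const runPromise = view.run(abortController.signal);
        return () => {
            // Stop the render loop and free GPU resources on unmount.
            abortController.abort();
            runPromise.then(() => view.dispose());
        };
    }, []);

    return <canvas ref={canvasRef} style={{ width: "100%", height: "100%" }} />;
}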
Where things might get more complicated is determining who "owns" which state. It's tempting to borrow/embed the render state as part of your own view state. As long as you can determine when and what parts of that view state have changed, e.g. by using immutable data and reference comparisons like we do, we recommend you duplicate what state you need in your own view state. Or, in other words, keep a single "source of truth" and update the render state accordingly.