Monday, July 24, 2017

3ds Max plugin developer wanted

As much as we despise Autodesk and would rather see the entire company go down in a pool of radioactive, fiery plasma (the eyebrow-scorching kind that is), the fact of the matter is that a sizeable chunk of the 3d artists out there remains loyal to 3ds Max for whatever reason. Due to this shocking fact, we're looking for an outstanding 3ds Max plugin developer with the skills to integrate our technology into 3ds Max (this role is in addition to the two roles advertised in the previous post: the graphics developer and full-stack developer).    

What we're looking for:

  • 2+ years of experience developing plug-ins for 3ds Max
  • Solid understanding of 3d artist workflows
  • Experience with rendering (this is a rendering plug-in)
  • Knowledge of real-time data streaming protocols and technologies (WebSocket etc.) desirable
  • Keen to keep abreast of the latest cutting-edge technologies in the fields of graphics and rendering

This is a remote contracting role. Send your application to sam.lapere@live.be

Friday, July 21, 2017

Excellent computer graphics developers wanted

As we are nearing the launch of our product, our team is expanding. We're currently looking for a graphics developer and a full-stack developer to join our team in NZ with the following skills and experience:

3D Graphics Developer

  • Bachelor, Master or PhD in Computer Science or similar field
  • Specialisation in computer graphics
  • (Constructive) Solid experience with parametric and non-parametric 3D modelling algorithms
  • Strong mathematical background (especially linear algebra + multivariable calculus)
  • Very good command of C++11 and OpenGL
  • Web development experience desirable
  • Experience with functional programming languages such as Erlang and Haskell is a plus (but not required)
  • Love for learning cutting edge experimental languages and frameworks
  • Flexible, can-do attitude
  • Perfectionist attitude and obsessed with quality
  • Be part of a very fast moving team
  • Keen to move, live and work in New Zealand

Full-Stack Web Developer

We're also looking for a top notch full-stack web developer to join our team. Candidates for this role should have:

  • Bachelor or Master in Computer Science or equivalent field
  • Minimum of 3 years of working experience with front-end and back-end web development (e.g. Django, Angular.js and Bootstrap)
  • Hands-on experience with and an unbounded passion for real-time high quality 3D graphics (WebGL, physically based rendering, see e.g. https://github.com/erichlof/THREE.js-PathTracing-Renderer#threejs-pathtracing-renderer)
  • Experience with languages such as Go desirable
  • Knowledge of WebAssembly desirable
  • Creative and original problem solving skills
  • Relentless hunger to learn more and become an expert in your field
  • UI design skills are a plus
  • Ability to work independently
  • Show initiative and be highly motivated, perfectionist and driven
  • Keen on working in NZ 

Send your CV to sam.lapere@live.be 

Applications will close once we find the right candidates to fill the roles

Sunday, July 9, 2017

Towards real-time path tracing: An Efficient Denoising Algorithm for Global Illumination

July is a great month for rendering enthusiasts: there's of course Siggraph, but the most exciting conference is High Performance Graphics, which focuses on (real-time) ray tracing. One of the more interesting-sounding papers is titled "Towards real-time path tracing: An Efficient Denoising Algorithm for Global Illumination" by Mara, McGuire, Bitterli and Jarosz, which was released a couple of days ago. The paper, video and source code can be found on the authors' project page:


Abstract 
We propose a hybrid ray-tracing/rasterization strategy for realtime rendering enabled by a fast new denoising method. We factor global illumination into direct light at rasterized primary surfaces and two indirect lighting terms, each estimated with one pathtraced sample per pixel. Our factorization enables efficient (biased) reconstruction by denoising light without blurring materials. We demonstrate denoising in under 10 ms per 1280×720 frame, compare results against the leading offline denoising methods, and include a supplement with source code, video, and data.

While the premise of the paper sounds incredibly exciting, the results are disappointing. The denoising filter does a great job filtering out almost all of the noise (apart from some that is still visible in reflections), but at the same time it kills pretty much all the realism that path tracing is famous for, producing flat and lifeless images. Even the first Crysis from 10 years ago (the first game with SSAO) looks distinctly better. I don't think applying such aggressive filtering algorithms to a path tracer will convince game developers to make the switch to path traced rendering anytime soon. A comparison with ground truth reference images (rendered with 5000 samples or more) is also missing, for some reason.

At the same conference, a very similar paper will be presented titled "Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination". 

Abstract 
We introduce a reconstruction algorithm that generates a temporally stable sequence of images from one path-per-pixel global illumination. To handle such noisy input, we use temporal accumulation to increase the effective sample count and spatiotemporal luminance variance estimates to drive a hierarchical, image-space wavelet filter. This hierarchy allows us to distinguish between noise and detail at multiple scales using luminance variance.  
Physically-based light transport is a longstanding goal for real-time computer graphics. While modern games use limited forms of ray tracing, physically-based Monte Carlo global illumination does not meet their 30 Hz minimal performance requirement. Looking ahead to fully dynamic, real-time path tracing, we expect this to only be feasible using a small number of paths per pixel. As such, image reconstruction using low sample counts is key to bringing path tracing to real-time. When compared to prior interactive reconstruction filters, our work gives approximately 10x more temporally stable results, matches reference images 5-47% better (according to SSIM), and runs in just 10 ms (+/- 15%) on modern graphics hardware at 1920x1080 resolution.
It's going to be interesting to see whether the method in this paper produces more convincing results than the other paper. Either way, HPG has a bunch more interesting papers that are worth keeping an eye on.
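
For intuition, here is a toy, heavily simplified sketch of my own (not the authors' code) of the core filtering idea the abstract describes: a single edge-aware à-trous wavelet pass in which a per-pixel luminance variance estimate controls how aggressively neighbours are blended. The real method adds temporal accumulation and depth/normal edge-stopping weights, which are omitted here.

```python
# Toy illustration of one variance-guided, edge-aware a-trous filter pass.
# Not the authors' code; temporal accumulation and depth/normal weights omitted.
import numpy as np

def atrous_pass(color, variance, step=1, sigma_l=4.0):
    """color: HxWx3 noisy radiance; variance: HxW luminance variance estimate."""
    h, w, _ = color.shape
    lum = color.mean(axis=2)                        # crude luminance proxy
    out = np.zeros_like(color)
    weight_sum = np.zeros((h, w))
    kernel = np.array([1/16, 1/4, 3/8, 1/4, 1/16])  # 5-tap B-spline kernel
    offsets = np.array([-2, -1, 0, 1, 2]) * step    # dilated ("a trous") taps
    for dy, ky in zip(offsets, kernel):
        for dx, kx in zip(offsets, kernel):
            ys = np.clip(np.arange(h) + dy, 0, h - 1)
            xs = np.clip(np.arange(w) + dx, 0, w - 1)
            n_color = color[ys][:, xs]
            n_lum = lum[ys][:, xs]
            # Large luminance differences are tolerated where the variance
            # (noise) is high, so noisy regions get blurred more strongly.
            w_lum = np.exp(-np.abs(lum - n_lum) /
                           (sigma_l * np.sqrt(variance) + 1e-6))
            wgt = ky * kx * w_lum
            out += n_color * wgt[..., None]
            weight_sum += wgt
    return out / weight_sum[..., None]
```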

UPDATE (16 July): Christoph Schied from Nvidia and KIT emailed me a link to the paper's preprint and video at http://cg.ivd.kit.edu/svgf.php. Thanks Christoph!

Video screengrab:


I'm not yet convinced by the quality of filtered path traced rendering at 1 sample per pixel, but the improvements in spatio-temporal stability of this noise filter could be quite helpful for filtering animated sequences rendered at higher sample rates.

UPDATE (23 July): There is another denoising paper out from Nvidia, "Interactive Reconstruction of Monte Carlo Image Sequences using a Recurrent Denoising Autoencoder", which uses machine learning to reconstruct the image.


Abstract 
We describe a machine learning technique for reconstructing image sequences rendered using Monte Carlo methods. Our primary focus is on reconstruction of global illumination with extremely low sampling budgets at interactive rates. Motivated by recent advances in image restoration with deep convolutional networks, we propose a variant of these networks better suited to the class of noise present in Monte Carlo rendering. We allow for much larger pixel neighborhoods to be taken into account, while also improving execution speed by an order of magnitude. Our primary contribution is the addition of recurrent connections to the network in order to drastically improve temporal stability for sequences of sparsely sampled input images. Our method also has the desirable property of automatically modeling relationships based on auxiliary per-pixel input channels, such as depth and normals. We show significantly higher quality results compared to existing methods that run at comparable speeds, and furthermore argue a clear path for making our method run at realtime rates in the near future.

Monday, July 3, 2017

Beta testers wanted

In the past several months, we have been developing a novel ultrafast photorealistic rendering application and we're almost ready to unleash our beast onto the world. In our humble opinion, we think our innovative, pioneering and revolutionary tech is going to be groundbreaking, earth-shaking, paradigm-shifting, status quo defying, industry-disrupting and transmogrifying, and be greater than the Second Coming of Sliced Bread! In short, we think it's going to be rather good.

We're currently looking for some outstanding beta-testers who have extensive experience with one of the following 3d modeling packages:

- 3ds Max
- Maya
- Cinema 4D
- Modo
- Blender
- LightWave 3D
- SketchUp

and a ray tracing based rendering engine like V-Ray, Corona, Cycles or similar.

The perfect candidate has also won or been nominated for a Montgomery Burns Award for Outstanding Achievement in the Field of Excellence.

To apply, send an email with a link to your artist portfolio to sam.lapere@live.be (people with low frustration tolerance need not apply).

UPDATE: Applications are now closed. Thanks to all who have applied.

Sunday, May 21, 2017

Practical light field rendering tutorial with Cycles

This week Google announced "Seurat", a novel surface lightfield rendering technology which would enable "real-time cinema-quality, photorealistic graphics" on mobile VR devices, developed in collaboration with ILMxLab:


The technology captures all light rays in a scene by pre-rendering it from many different viewpoints. At runtime, entirely new viewpoints are created by interpolating those pre-rendered viewpoints on the fly, resulting in photoreal reflections and lighting in real time (http://www.roadtovr.com/googles-seurat-surface-light-field-tech-graphical-breakthrough-mobile-vr/).

At almost the same time, Disney released a paper called "Real-time rendering with compressed animated light fields", demonstrating the feasibility of rendering a Pixar-quality 3D movie in real-time, in which the viewer can actually be part of the scene and walk in between scene elements or characters (within the limits of a predetermined camera path):


Light field rendering in itself is not new and has actually been around for more than 20 years, but it has only recently become a viable rendering technique. The first paper was released at Siggraph 1996 ("Light field rendering" by Mark Levoy and Pat Hanrahan) and the method has since been incrementally improved by others. Stanford University compiled an entire archive of light fields to accompany the 1996 Siggraph paper, which can be found at http://graphics.stanford.edu/software/lightpack/lifs.html. A more up-to-date archive of photography-based light fields can be found at http://lightfield.stanford.edu/lfs.html.

One of the first movies that showed a practical use for light fields is The Matrix from 1999, where an array of cameras firing at the same time (or in rapid succession) made it possible to pan around an actor to create a super slow motion effect ("bullet time"):

Bullet time in The Matrix (1999)

Rendering the light field

Instead of attempting to explain the theory behind light fields (for which there are plenty of excellent online sources), the main focus of this post is to show how to quickly get started with rendering a synthetic light field using Blender Cycles and some open-source plug-ins. If you're interested in a crash course on light fields, check out Joan Charmant's video tutorial below, which explains the basics of implementing a light field renderer:


The following video demonstrates light fields rendered with Cycles:



Rendering a light field is actually surprisingly easy with Blender's Cycles and doesn't require much technical expertise (besides knowing how to build the plugins). For this tutorial, we'll use a couple of open source plug-ins:

1) The first one is the light field camera grid add-on for Blender, made by Katrin Honauer and Ole Johanssen from Heidelberg University in Germany: 


This plug-in sets up a camera grid in Blender and renders the scene from each camera using the Cycles path tracing engine. Good results can be obtained with a grid of 17 by 17 cameras with a distance of 10 cm between neighbouring cameras. For high quality, a 33-by-33 camera grid with an inter-camera distance of 5 cm is recommended.

3-by-3 camera grid with overlapping frustums
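
To give an idea of what the add-on automates, here is a minimal bpy sketch of my own (not the actual add-on code) that creates such a planar camera grid; the function name and defaults are just illustrative and follow the numbers quoted above.

```python
# Minimal illustration of a planar light field camera grid in Blender's
# Python API. The real add-on also handles naming, rendering and metadata.
import bpy

def create_camera_grid(grid=17, spacing=0.10, z=0.0):
    """Create a grid x grid array of cameras, 'spacing' metres apart."""
    half = (grid - 1) / 2.0
    for i in range(grid):
        for j in range(grid):
            name = "LF_cam_{}_{}".format(i, j)
            cam_data = bpy.data.cameras.new(name=name)
            cam_obj = bpy.data.objects.new(name, cam_data)
            # All cameras lie in one plane and share the same orientation.
            cam_obj.location = ((i - half) * spacing, (j - half) * spacing, z)
            # Blender 2.8+; on 2.7x use bpy.context.scene.objects.link(cam_obj)
            bpy.context.scene.collection.objects.link(cam_obj)

create_camera_grid()
```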

2) The second tool is the light field encoder and WebGL based light field viewer, created by Michal Polko, found at https://github.com/mpk/lightfield (build instructions are included in the readme file).

This plug-in takes all the images generated by the first plug-in and compresses them by keeping a small number of keyframes and encoding only the deltas for the remaining intermediate frames. The viewer is WebGL-based and makes use of virtual texturing (similar to Carmack's megatextures) for fast, on-the-fly reconstruction of new viewpoints from the pre-rendered ones (via hardware-accelerated bilinear interpolation on the GPU).
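
For intuition, here is a small NumPy sketch of my own (not Michal Polko's code) of the basic reconstruction step, assuming the pre-rendered views are already loaded as arrays: a virtual viewpoint inside the grid is simply a bilinear blend of the four nearest camera images.

```python
# Bilinear blend of the four nearest pre-rendered views.
# Illustrative only; the real viewer does this per tile on the GPU.
import numpy as np

def interpolate_view(images, u, v):
    """images: rows x cols nested list of HxWx3 arrays;
    (u, v): fractional grid coordinates of the desired virtual camera."""
    rows, cols = len(images), len(images[0])
    u = float(np.clip(u, 0, rows - 1))
    v = float(np.clip(v, 0, cols - 1))
    i0, j0 = int(u), int(v)
    i1, j1 = min(i0 + 1, rows - 1), min(j0 + 1, cols - 1)
    fu, fv = u - i0, v - j0
    top = (1 - fv) * images[i0][j0] + fv * images[i0][j1]
    bottom = (1 - fv) * images[i1][j0] + fv * images[i1][j1]
    return (1 - fu) * top + fu * bottom
```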


Results and Live Demo

A live online demo of the light field with the dragon can be seen here: 


You can change the viewpoint (within the limits of the original camera grid) and refocus the image in real-time by clicking on the image.  




I rendered the Stanford dragon using a 17-by-17 camera grid with a distance of 5 cm between adjacent cameras. The light field was created by rendering the scene from 289 (17x17) different camera viewpoints, which took about 6 minutes in total (about 1 to 2 seconds of render time per 512x512 image on a good GPU). The 289 renders are then heavily compressed (for this scene, the 107 MB batch of 289 images was compressed down to only 3 MB!). 

A depth map is also created at the same time, which enables on-the-fly refocusing of the image by interpolating information from several views.
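
As a rough illustration of why refocusing works (my own sketch, not the viewer's actual code): shift each view in proportion to its offset from the centre camera and average, so objects at the chosen parallax line up sharply while everything else blurs. The `focus_shift` parameter below is hypothetical; the real viewer derives the equivalent value from the depth map under the mouse click.

```python
# Shift-and-add refocusing over the camera grid (illustrative sketch).
import numpy as np

def refocus(images, focus_shift):
    """images: rows x cols nested list of HxWx3 arrays;
    focus_shift: pixels of shift per grid step (hypothetical parameter)."""
    rows, cols = len(images), len(images[0])
    ci, cj = (rows - 1) / 2.0, (cols - 1) / 2.0
    acc = np.zeros_like(images[0], dtype=np.float64)
    for i in range(rows):
        for j in range(cols):
            dy = int(round((i - ci) * focus_shift))
            dx = int(round((j - cj) * focus_shift))
            # np.roll is a crude stand-in for a proper subpixel shift.
            acc += np.roll(images[i][j], shift=(dy, dx), axis=(0, 1))
    return acc / (rows * cols)
```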

A later tutorial will add a bit more freedom to the camera, allowing for rotation and zooming.

Wednesday, January 11, 2017

OpenCL path tracing tutorial 3: OpenGL viewport, interactive camera and defocus blur

For now, here are just the links to the source code on GitHub; I'll update this post with a more detailed description when I find a bit more time:



Part 1: Setting up an OpenGL window

https://github.com/straaljager/OpenCL-path-tracing-tutorial-3-Part-1




Part 2: Adding an interactive camera, depth of field and progressive rendering

https://github.com/straaljager/OpenCL-path-tracing-tutorial-3-Part-2



Thanks to Erich Loftis and Brandon Miles for useful tips on improving the generation of random numbers in OpenCL to avoid the distracting artefacts (showing up as a sawtooth pattern) when using defocus blur (still not perfect but much better than before).
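
For anyone curious what such an improvement typically looks like: a common way to avoid correlated noise patterns in GPU path tracers is to seed each pixel's random number generator with an integer hash of the pixel index and frame number (for example a Wang hash feeding a xorshift generator). The sketch below illustrates that general idea in Python; whether it matches the exact scheme used in the tutorial code, I'll leave to the linked repository.

```python
# Generic hash-based per-pixel RNG seeding (Wang hash + xorshift).
# Illustrative only; not the tutorial's actual OpenCL code.
MASK = 0xFFFFFFFF  # emulate 32-bit unsigned arithmetic

def wang_hash(seed):
    seed = ((seed ^ 61) ^ (seed >> 16)) & MASK
    seed = (seed * 9) & MASK
    seed = (seed ^ (seed >> 4)) & MASK
    seed = (seed * 0x27d4eb2d) & MASK
    return (seed ^ (seed >> 15)) & MASK

def xorshift32(state):
    state ^= (state << 13) & MASK
    state ^= state >> 17
    state ^= (state << 5) & MASK
    return state & MASK

def random01(pixel_index, frame):
    # Different pixel/frame pairs get decorrelated seeds, which breaks up
    # structured artefacts caused by correlated random sequences.
    state = wang_hash(pixel_index * 9781 + frame * 6271 + 1)
    return xorshift32(state) / float(MASK)
```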

The next tutorial will cover rendering of triangles and triangle meshes.