The future of our roles in 3D


#1

In the interest of creating some actual conversation in this place, one that isn’t petty (admitting that I’m guilty of pettiness on occasion too) and is actually constructive, I’d love to start a conversation not just about R21 and subscription costs, but about where our skill sets and our beloved software will be in three to four years’ time. Because ultimately we as users will shape C4D’s future to some extent.

You see, personally, I want to get the hell out of traditional 3D production, whether that’s boring corporate designs for Apple or some half-baked motion graphics piece for Salesforce. I’d like to start exploring what’s next: emerging technologies like VR, AR, virtual production and more. With C4D’s ease of use, there’s a lot of room for us to grow in these areas.

Several things I’ve seen online lately have blown my mind: new technologies that people with the financial means and motivation are slowly introducing to production. They’re ideas I’ve had myself in the past, but ultimately things I couldn’t pursue for lack of funding or of any understanding of how to get my own concepts into development. Something that’s about to change… Anyway.

Consider The Lion King remake… Aside from its failure at the box office and the much-criticized uncanny-valley animation, it represents a massive shift in the way we think about filmmaking. It was shot in a gaming environment, all in VR, with makeshift weighted camera rigs that felt real: a whole crew inside a VR world, making creative decisions. And then they were able to take that recorded information from Unity and somehow transfer it over to Houdini. That’s the kind of thing we should be figuring out how to do in C4D: how can we capture that data and then get it up and running in C4D? That’s a conversation I’d love to have with you all.

You can read all about that here: https://www.wired.com/story/disney-new-lion-king-vr-fueled-future-cinema/
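To make the “get it up and running in C4D” part concrete, here’s my rough, untested idea of what the import side could look like: a C4D Python sketch that reads a hypothetical JSON take (one position/rotation sample per frame, exported by the engine) and bakes it onto a camera as keyframes. The file path and format are invented for illustration; I have no idea what the actual Lion King pipeline looked like:

```python
import json
import c4d

def get_track(obj, parameter_id, component_id):
    """Find or create a CTrack for one component of a vector parameter."""
    desc_id = c4d.DescID(
        c4d.DescLevel(parameter_id, c4d.DTYPE_VECTOR, 0),
        c4d.DescLevel(component_id, c4d.DTYPE_REAL, 0),
    )
    track = obj.FindCTrack(desc_id)
    if track is None:
        track = c4d.CTrack(obj, desc_id)
        obj.InsertTrackSorted(track)
    return track

def main():
    doc = c4d.documents.GetActiveDocument()
    fps = doc.GetFps()

    # Hypothetical export: [{"pos": [x, y, z], "rot": [h, p, b]}, ...],
    # one sample per frame, rotations in radians.
    with open("/tmp/camera_take.json") as f:
        frames = json.load(f)

    cam = c4d.BaseObject(c4d.Ocamera)
    cam.SetName("Recorded VR Camera")
    doc.InsertObject(cam)

    components = [c4d.VECTOR_X, c4d.VECTOR_Y, c4d.VECTOR_Z]
    for frame_index, sample in enumerate(frames):
        t = c4d.BaseTime(frame_index, fps)
        for param, values in (
            (c4d.ID_BASEOBJECT_REL_POSITION, sample["pos"]),
            (c4d.ID_BASEOBJECT_REL_ROTATION, sample["rot"]),
        ):
            for axis, value in enumerate(values):
                track = get_track(cam, param, components[axis])
                curve = track.GetCurve()
                key = curve.AddKey(t)["key"]
                key.SetValue(curve, value)

    c4d.EventAdd()

if __name__ == "__main__":
    main()
```

A real tool would need to handle coordinate-system conversion between the engine and C4D, but this is the shape of it.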

Another example of an incredible step forward in the world of digital sets is this effort from Unreal in collaboration with Lux Machina. Please watch, it will impress: https://www.youtube.com/watch?v=kFBha5BE38k
These guys are building on a tech demo from two years ago, mapping projected digital environments to camera movement. Ever since I saw the original sci-fi demo, I’ve been thinking about its potential for shooting low-budget but high-quality CGI short films.

Personally, I have an AR project in mind that’s a great use case, something that doesn’t deviate greatly from the aforementioned concepts but has yet to get a mention. And when I think about it, it’s as though if I don’t create it myself, its creation will almost be an inevitable consequence of AR development. So instead of reading the news five years from now and thinking “sons of bitches stole my multi-million-dollar idea”, I’m jumping on it now and seeking investors soon.

And on that note, I want to talk with possible collaborators here about our ideas, about how Maxon may see what we’re doing and adapt, and about ways to shake up the industry moving forward, keeping our world relevant while sticking with our favorite software.

I think these sorts of conversations in this small community can lead to bigger things.

Anyway - End rant.
Just hoping to spark some interesting conversation.

Stuart Lynch | 510-541-5158 | http://www.project1media.com


#2

I can already do that now, in C4D, in VR. But since there is really no one to sell it to, I haven’t released any further updates to the plugin. You can walk around in VR and view your scene. You can track other objects in VR too. I just need to stick a render-to-texture view of the camera onto a dummy camera rig you hold in your hands, and you could then motion-track it as if you were filming with a real camera, while in VR. I could then add iPad AR views to accompany it, and track their positions and locations as well. All very cool tech, very easy to implement, but very niche. Perhaps in another 3-4 years, as you say, this will become more available as the price of the gear comes down and makes it accessible.

I was also going to write a live connection to UE4 and have a custom build running for higher-quality visuals and feedback. But it’s difficult to find funding for this. Cineversity also has a half-implementation of this, and I knew Epic themselves were working on C4D file integration. But perhaps soon I could look at that again and have a live view of the C4D scene through a UE4 viewer instead of my custom OpenGL engine. And again, track and record all the VR trackers back in C4D as you go. None of this is hard to do; it just takes development time.
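To give a flavour of the “record the trackers as you go” part, here’s a bare-bones sketch (not my actual plugin code) of a recorder that collects pose packets streamed from the engine over UDP and dumps them to JSON for later import into C4D. The packet layout and port number are invented for illustration:

```python
import json
import socket

# Invented wire format: each UDP datagram is one JSON dict, e.g.
# {"device": "tracker_1", "pos": [x, y, z], "rot": [h, p, b]},
# sent once per frame by the engine (or an OpenVR bridge).
HOST, PORT = "127.0.0.1", 9763  # port chosen arbitrarily for this sketch

def record_take(num_samples=600):
    """Collect pose packets and dump them to disk for later import into C4D."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    samples = []
    try:
        while len(samples) < num_samples:
            data, _addr = sock.recvfrom(4096)
            samples.append(json.loads(data.decode("utf-8")))
    finally:
        sock.close()
    with open("/tmp/camera_take.json", "w") as f:
        json.dump(samples, f)
    print("Recorded {} samples".format(len(samples)))

if __name__ == "__main__":
    record_take()
```

The hard part isn’t this; it’s latency, smoothing and timecode sync, which is where the actual development time goes.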

I have also talked directly with people at Maxon, Nvidia, Adobe and HTC. Sadly, no funding for me on my VR concepts. The market was just too small and the cost to get on board was too high. It’s getting better now, though.


#3

I think AR/VR in their current forms will go the way of 3DTVs.

There’s a lot of interesting work being done, but really just at the novelty level, and it won’t be a thing until the tech is miniaturised and we are wearing glasses that don’t look ridiculous. Whipping out your mobile phone or holding up an iPad in public is not a natural thing to do, and until this aspect is resolved, AR/VR is a work in progress.

The Apple keynote demos of people holding iPads up around empty tables, or of a child pointing an iPad at a pile of Lego bricks to see some baked animation, always come across as forced and frankly pointless. AR/VR is DOA at this point.

Sure, there’s a lot of money to be made selling the dream of AR and what it might be, and that’s the state of the industry right now. But even if glasses can be miniaturised and made powerful enough, who wants to live in a world where everyone is walking around with a CCTV camera recording everything that moves or is said? I don’t.

I’m far more interested in the role of mobile phones, with their increasing CPU/GPU power and vast array of sensors, as affordable photogrammetry and motion-capture devices for the 3D pipeline of today. I’d like to see my video cameras get IR and depth sensors and run apps that capture not just high-quality video but a host of metadata like motion-capture and depth information.

There’s also the spectre of AI and how it can assist us, or make us redundant at the flick of a switch. There’s certainly a lot of interesting research going on in computer vision; EbSynth, for example, can turn your video or 3D animations into moving watercolour paintings from a single painted frame.

The combination of mobile phone tech and AI computer vision is the most interesting area of media production for me.

C4D won’t be leading the discussion on any level; Maxon is content to be a follower, and some way off the pace too. The trail is being blazed in the open-source field on GitHub. The more you immerse yourself in this, the more you get small glimpses of what one day might be possible.

LOL, I wrote what I did and then watched the Next Level Cinematography video! Doh. We’re on the same page.


#4

Great thread!

I’m very interested in AR, and that interest is one of the few things keeping me stuck to a Mac. I feel like Apple’s AR is way ahead of everyone else’s, and Apple’s mobile GPU and CPU performance is fantastic, unlike that of their home computers.

I’m very interested in Project Aero and desperately want to know more about it!

I’ve been intrigued by AR ever since seeing the Wingnut AR demo at WWDC 2017. Start at 0:54: https://youtu.be/S14AVwaBF-Y?t=54

I feel like Apple is taking a good approach. People already have phones, so why try to create a new device for public consumption? For once, Apple’s tight control over hardware plays in their favor with ARKit. I have Android and iOS devices, and the AR stuff I’ve tried on Apple hardware is way, way better.

Just me being a nitpicker: The Lion King was very successful. $482 million US, $863 million non-US: $1.3 billion is not bad for under 30 days at the box office!


#5

It’s funny to see those Disney guys sitting around with Vives on their heads.

I have a Vive and a room set up for VR. Last year I added a green screen along multiple walls and had multiple trackers set up, including one mounted on a Sony A7S II camera.

I worked a bit with Kent’s C4D plugin and spent a lot of time in Unity and Unity VR.

Life intervened to put a hold on my ambitions. I don’t know if/when I’ll be able to get back to that.


#6

I know nothing really about VR. But I see changes in 3D animation too.

Everything needs to get faster; the clients demand it… cheap and fast but high quality. That is the development.
I could imagine Unreal publishing its technology as a render engine; then maybe even Redshift will be considered slow.
Compositing needs to get faster too. I would imagine that either real-time compositing software gets easier to use (with invisible conversion into GPU-readable formats), or the well-known compositing packages get faster. By the way, what would you say is the fastest compositor for 3D artists?

Even though I like to work with fast hardware and software, I do not think this is really good for us, the artists. Faster work > cheaper jobs > more jobs per artist > fewer artists.

And AI is quite impressive too. I guess it is just a matter of a few years until you can define a background by typing in some words (desert, sunset, ruins in the back), then move the stuff around a bit and… done in 30 minutes.

I don’t want to be too pessimistic, but I think it will somehow go in this direction.


#7

I’ve been following the AR/VR scene quite closely for the last two years, mostly as an enthusiast, but have been learning Unity, VR180/360 workflows, and VR dev for about the past six months, hoping to turn this into a new career field at some point.

I’ve also acquired many of the latest headsets (PlayStation VR, Oculus Go, Oculus Quest, Oculus Rift S, Valve Index) plus a number of VR cameras and audio gear to match.

Not likely. While the AR/VR market hasn’t exploded like many had hoped, it’s been growing steadily for some time now. Demand for Oculus’ new standalone Quest has been extremely high. They are “selling as fast as they can make them,” says Zuckerberg.

Microsoft (HoloLens 2) and Magic Leap (One) are ahead of Apple in AR. And all the latest rumors say that Apple has suspended work on its AR devices. No doubt when Apple finally does release something, it will have a major impact on the market.


#8

They may have suspended work on hardware (a HoloLens/Magic Leap/mixed-reality-headset type of device), but they are seriously hiring developers in both AR and VR.

https://jobs.apple.com/en-us/search?search=ar%2Fvr&sort=newest&location=united-states-USA

Note that I haven’t heard anything myself regarding any hardware, but it wouldn’t surprise me to see a device released in a couple of years.


#9

Yes, I know they’ve been aggressively hiring for AR/VR positions recently. Apple won’t release a product until they can miniaturize it enough to be fashionable. My guess is we’re not likely to see an Apple AR product until at least 2021.

It’s also doubtful that Apple will ever release a dedicated VR device, though future devices will likely combine AR and VR functionality in one.


#10

I was thinking that Apple’s phones and tablets are more numerous than any other AR devices out there. Magic Leap may be better, but they haven’t sold millions and millions of units like iPhones have.

I haven’t been persuaded to get any dedicated AR device. Personally, I’m more likely to explore Apple’s mobile AR than anything else, because regular folk will be able to try it.

I’m most interested in real-time engines, because I’m an impatient man. :grin:

I’ve seen people mention “Redshift RT” in these threads, but I don’t see much about it anywhere. I did see it mentioned in a slide in an RS presentation. Anything real-time gets me excited, which is why I’ve kept an eye on Blender’s Eevee and U-Render.


#11

The real-time LED-screen sets offer some cool things, but they aren’t ready yet. I wish I could be more specific, but I’m too tied into several projects to speak freely. Just keep in mind with any making-of featurettes for films that they are a marketing tool made by marketing people, not by the artists who worked on the project.

The worst part of being on state-of-the-art VFX projects is when you see the BS they sell to the audience.


#12

I’d take anything Suckerberg says with a massive pinch of salt; he’s a known liar. The current state of VR/AR is naff and a novelty, and getting current iPad tech miniaturised into non-ridiculous-looking glasses is at least a decade away. Holding up an iPad to view a novelty 3D object matchmoved onto the coffee table in front of you is only interesting a couple of times. The fake forced grins in the marketing promos just reinforce my belief: if the tech were as great as we’re supposed to believe, the grins from the actors would be real; they wouldn’t have any trouble selling it.

I can see a number of areas where AR makes a good fit, such as in-vehicle HUDs, but nothing like mass adoption, because walking around with a huge iPad held out in front of you makes you look a bigger tool than your average Glasshole ever did. Until AR is actually cool, it’ll remain a niche.

3D film and TV production was niche too, but at the time the marketers were telling us it was the next big thing and everything would be 3D. It didn’t turn out well. We are at the beginning of the marketing phase for AR, and more and more people are going to be telling us that AR is here and that it’s going to do great things in the near future. NAB will even make its trade show all about it, but until the perceived value of the experience outweighs looking like a tool, it won’t take off. No one wants to look like a tool in public.


#13

Redshift RT is at a very early stage of development; at SIGGRAPH there was a very early tech demo that attendees to the Redshift area could see.

Redshift RT is a second rendering path for the Redshift render engine: the user will be able to freely select either full Redshift or RT in the render settings without having to change the materials on their objects. (Eevee, for example, is not compatible with all Cycles shader networks, so you can’t switch over with 100% compatibility.) RT uses Microsoft’s DXR API on PC and Metal on Mac to access any ray-tracing hardware on the GPU, but it also supports GPUs without RT cores, so it is intended to work on AMD GPUs on the PC too.
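Engine switching in C4D is already scriptable, so presumably an RT toggle would live in the same render settings. Purely as a sketch (the RT line is my speculation, since RT hasn’t shipped; the plugin ID is the commonly cited one for Redshift):

```python
import c4d

# 1036219 is the commonly cited C4D plugin ID for the Redshift renderer.
ID_REDSHIFT_RENDERER = 1036219

def switch_to_redshift(doc):
    rdata = doc.GetActiveRenderData()
    rdata[c4d.RDATA_RENDERENGINE] = ID_REDSHIFT_RENDERER
    # Hypothetical: an RT toggle might look something like
    # rdata[c4d.REDSHIFT_RT_ENABLED] = True  (invented name, RT hasn't shipped)
    c4d.EventAdd()

if __name__ == "__main__":
    switch_to_redshift(c4d.documents.GetActiveDocument())
```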

The Redshift developers were originally game-engine developers, so they seem like the right team to take this task head-on. Panos said there may be some limitations, i.e. motion blur, transparent objects and AOVs, but they intend to support as much of full Redshift as possible.

After using Eevee and getting used to one- and two-second renders, I look forward to seeing what the Redshift team brings to the table with real ray-tracing support. I wouldn’t hold your breath, though; the project sounds like it has only just started.


#14

Having worked on several AR and VR projects over the last five years, I can understand why it is not making it into mainstream production. The reason: the tech is annoying, uncomfortable, and just adds another layer of technological complication to production. Did the films or pictures get any better because of the VR production? No. Look at the set for the shoot: completely bloated, overcomplicated production. I’ve 3D modelled in VR apps, and after two minutes I wanted my mouse and keyboard back. Shortcuts!

There are great possibilities for tech, science and porn to go VR, but, like deep compositing for example, it is overkill for the simple 3D artist in media production. But I am waiting to be called out for my negativity in the coming years :wink:


#15

Just adding here re: AR/VR.

For client work, there can be a lot of limitations on what you are able to achieve. If it’s AR, complexity is limited, which can affect the look and feel, usually for the worse. The same goes for VR. We can do our best work when doing pre-rendered VR animation. All the great bells and whistles you get in Unity and Unreal don’t translate when you’re working within the client’s constraints on the chosen mobile device, like the Oculus Go or even the Quest.

Going all out with the Rift is another story: you can do some amazing things in real time with the right hardware. Often, in our case, the limitation is where the experience will be delivered, which is usually trade-show booths or training events that sometimes require hundreds of devices, hence the choice of more portable, less expensive mobile hardware.

These have been my frustrations with real time; however, I know in time things will change.


#16

The biggest problem with VR, even if the hardware issues are fixed, is that it requires your full attention, so you get tired much faster. You can watch TV casually, giving it, say, 10% of your brain’s CPU, and if something interesting appears, it just clicks and you increase your attention. But VR occupies most of your brain.


#17

Joel, could you please elaborate on that? Are you talking about PLA or other features that Unity/UE have issues with?


#18

VR imposes hardware constraints beyond those of standard gaming. For simplicity, think of what the hardware demands would be for 8K gaming: simpler shaders, lower geo and so on are required. Tech like foveated rendering will in time help bring graphics fidelity closer to PC games, but there will always be a gap.
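Some quick back-of-the-envelope numbers to show why the 8K comparison isn’t crazy. The resolutions and refresh rates below are just typical examples I picked, and real headsets render above panel resolution to compensate for lens distortion, so the VR figure is actually an underestimate:

```python
# Raw pixel throughput: how many pixels per second the GPU has to shade.
def pixels_per_second(width, height, hz, views=1):
    return width * height * hz * views

flat_1080p = pixels_per_second(1920, 1080, 60)           # desktop baseline
vr_headset = pixels_per_second(1440, 1600, 90, views=2)  # Vive Pro class panels, two eyes
eight_k    = pixels_per_second(7680, 4320, 60)

print("1080p @ 60 Hz: {:.2f} Gpx/s".format(flat_1080p / 1e9))  # ~0.12
print("VR    @ 90 Hz: {:.2f} Gpx/s".format(vr_headset / 1e9))  # ~0.41, over 3x 1080p
print("8K    @ 60 Hz: {:.2f} Gpx/s".format(eight_k / 1e9))     # ~1.99
```

And on a standalone headset, that multiple of a 1080p load has to come from a mobile-class GPU, which is why simpler shaders and lower geo are non-negotiable.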

I find the immersive experience more than compensates for some loss of eye candy.

It’s not growing nearly as fast as I anticipated, but it is still growing. Growing fast in percentage terms, in fact, but still only around 1% of Steam users.

[Chart: VR headsets on Steam, December 2018]


#19

Unity and UE aren’t the issue; you can do some amazing things with them. It’s the device that you are building the experience for that is the limiting factor. Ultimately you are deploying some sort of app/experience that will run on a headset (if VR) or an iPhone/Android device/iPad (if AR). You may need to simplify the geo, effects, PLA, etc. in order to get something that runs smoothly; a rough budget check like the sketch below is often the first step. We found that doing everything from within Unreal or Unity (using native shaders, effects and animation rather than importing these things from C4D) helped playback.
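As a trivial illustration of what “budgeting” means in practice, here’s a rough C4D Python sketch that tallies scene polygons against a made-up mobile-VR budget. It only sees editable polygon objects (generator caches are ignored), so treat it as a starting point, not a real profiler:

```python
import c4d

# Entirely made-up budget; real limits depend on the target device and scene.
POLY_BUDGET = 100000

def count_polys(op, total=0):
    """Recursively sum polygon counts over the object hierarchy."""
    while op:
        if isinstance(op, c4d.PolygonObject):
            total += op.GetPolygonCount()
        total = count_polys(op.GetDown(), total)
        op = op.GetNext()
    return total

def main():
    doc = c4d.documents.GetActiveDocument()
    total = count_polys(doc.GetFirstObject())
    status = "OK" if total <= POLY_BUDGET else "over budget"
    print("Scene polygons: {} ({}, budget {})".format(total, status, POLY_BUDGET))

if __name__ == "__main__":
    main()
```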


#20

Good / useful topic, Priest. I read your post the other day and found the idea of creating a film by shooting against a procedurally generated backdrop (vs. a standard green-screen approach) pretty interesting. I consider this a type or version of AR, and IMO (to expand on what infograph said) it is a more promising technology than the “Pokémon on iPhone” school of AR, which I think is a novelty like 3DTV was, with limited utility generally. There are, randomly, some cool AR phone applications that have nothing to do with creativity (like an app that can measure all the floorboards in a room to tell you how many square feet of carpet you will need). But on the entertainment side I don’t think it will last. It has already died down a lot, and for good reason: it’s fatiguing and ultimately not a convenient way to experience things (we tend to wrongly assume anything on a smartphone = convenience).

And when integrated with eyeglasses, it becomes a serious privacy-invasion concern if you take the would-be wide-scale adoption to its logical conclusion. A world filled with Glassholes is not a better place to be, and it won’t be if Apple releases their version either. Invasive is invasive.

But the “live background” approach to filming, analogous to the VJ movement where people use amazing projection systems to match their art to music and project it onto structures, probably has a big future. I’m not exactly sure how C4D will fit into that world, but it wouldn’t surprise me to see Maxon build in some kind of module one day for serious real-time procedural terrain generation or the like. Their GPU support has to get a lot better (Metal on Mac, etc.), though, and their viewport an order of magnitude better, to make something like that work realistically.

But in the larger spirit of the topic, I’ve always found 3D creation to be way more complicated than it should be. Even a relatively friendly app like C4D takes a lot of dedication to become modestly proficient at, to the point where someone can say to you, “Hey, can you mock up a 20-second animation, or a model that looks like ______ from this other movie ______, except that ______?” and you can make it in a few hours or days, depending on complexity. It’s not like Photoshop, where most people here could probably be given a stack of RAW files and turn them into a nicely tonal- and color-balanced HDR pano, with those annoying telephone poles removed, in the span of five minutes. That app is complex and deep, but refined enough that even complex operations can be learned and accomplished pretty quickly. It’s certainly not a 1:1 analogy, and not totally fair because of the added dimension, but in general I think it’s accurate: 3D is still too complicated.

After Effects and Illustrator are a bit more complex and more difficult than Photoshop, but still probably easier for most creatives than learning a 3D app for the first time, or diving into some part of one you’ve never used before (UV editing, whatever it is). The one possible exception is sculpting: most 3D apps with sculpting have it set up in such a way that it’s more approachable. There are more tool types, yes, but taken together the flow is more intuitive when you’re molding something out of a primitive. The point of all this being: one wonders, as the AI and ML sciences start to mature in this area, whether MAXON and others might be able to leverage them to abstract away some of the UI complexity and simultaneously speed up the process of look development. Make that amazing space cruiser you want to model take two or three hours instead of two or three days of hard work.

I’m not exactly sure how it would look or work, but consider how a good DSLR has metering logic trained on many thousands of scene types: as you point the camera at the subject and it takes in the general lighting, it can help you find the correct manual settings. You still need to know what you’re doing, but compared to cameras from 30 years ago, you’re much more likely to get the right result on the first try. It would be pretty cool if 3D apps had a sort of block-level primitive definition of thousands of objects that might be created, plus ML, and you started there: by defining the type of thing you’re about to build, so that as you lay out its foundations, the app intelligently uses some sort of overlay hinting or the like to help you move through it much more quickly. Or something like that.

I’m no UX guru and certainly not a programmer, so my example is probably poor, but you get the idea. We need to find ways of making 3D creation more efficient and intuitive without “dumbing it down.” For the dumbed-down crowd there will be stuff like the new Adobe Dimension, and that’s perfectly fine for making retail package designs or fun 3D composite scenery or the like. We need 3D apps that are deep and capable but at the same time have some sort of intelligence to them, to help greatly reduce the minutiae and monotony of making complex models, scenes, etc. To me, a lot of the time 3D still feels more like a mechanical/technical task than a creative one.

That needs to be bridged… somehow.