|
Post by awpeacock on Nov 2, 2023 18:54:41 GMT
Since the big release onto iPlayer yesterday, I started watching Power of the Daleks. And beyond being impressed by how delightfully atmospheric the story comes across with a slow build of tension, one other thought occurred to me.
I've mentioned before about AI recreating the missing episodes (and other people have posted short clips of AI-generated footage made from telesnaps), but the main drawback most people bring up is the amount of data, inputs and instructions that would need to be provided for any AI to successfully produce moving images. But, with so many animations now produced, are they not likely to be a very good source of data/information to feed into an AI?
If you could combine telesnaps, animations, scripts (with "stage directions" for want of the correct term) and the audio, there really is a wealth of information that could be fed in. Do we know if anybody has tried (or is thinking of trying) using animations like a kind of "motion capture" for a system to overlay "live" actors over the top of?
(On a side note, it truly is a shame this story doesn't exist. Troughton was such an expressive actor that it would've been great to see his original take on the role - even just from the audio, you can instantly hear how different he is from the friendly, charming space hobo he later became, and animation will never do him justice.)
|
|
|
Post by jamesvincent on Nov 6, 2023 18:45:21 GMT
The tech to do this is coming; no doubt multiple people will make various versions and post them on YouTube. It'll keep improving until a satisfactory version of each episode is produced. No doubt in 10 years we will have 4K versions of missing episodes far superior to the originals!
|
|
|
Post by stevegerald on Nov 7, 2023 4:10:51 GMT
Animated people and real people move differently.
|
|
|
Post by awpeacock on Nov 8, 2023 19:01:22 GMT
Animated people and real people move differently. No, and some of the movement I've seen in these animations has been less realistic than in Scooby Doo. But I was thinking that it would at least provide a point of reference for camera pans, etc., and at least a direction of travel for characters that the AI could interpret more realistically. It would probably be quite amusing, actually, to see "real" people made to move like some of these animations.
|
|
|
Post by Marie Griffiths on Nov 11, 2023 13:48:04 GMT
I watched the Reign of Terror for the first time last night and the animations are fantastic. The camera direction is perhaps better than the original's. I would love to see AI combined with this for a photorealistic production.
|
|
|
Post by awpeacock on Nov 11, 2023 15:40:14 GMT
I watched the Reign of Terror for the first time last night and the animations are fantastic. The camera direction is perhaps better than the original's. I would love to see AI combined with this for a photorealistic production. I've binged through all of them for the first time thanks to iPlayer and I'd agree - RoT absolutely stands head and shoulders above the rest of the animations (I then followed it up with Galaxy 4 - good grief, chalk and cheese). That one, I would've thought, could easily be used as a reference point (assuming AI could accurately interpret animation). The only question I had was whether the original direction really had quite so many extreme close-ups?
|
|
|
Post by John Wall on Nov 11, 2023 20:03:22 GMT
I watched the Reign of Terror for the first time last night and the animations are fantastic. The camera direction is perhaps better than the original's. I would love to see AI combined with this for a photorealistic production. I've binged through all of them for the first time thanks to iPlayer and I'd agree - RoT absolutely stands head and shoulders above the rest of the animations (I then followed it up with Galaxy 4 - good grief, chalk and cheese). That one, I would've thought, could easily be used as a reference point (assuming AI could accurately interpret animation). The only question I had was whether the original direction really had quite so many extreme close-ups? TV tended, and tends, to have close-ups due to lack of budget for sets, etc.
|
|
|
Post by barneyhall on Nov 13, 2023 0:56:19 GMT
I watched the Reign of Terror for the first time last night and the animations are fantastic. The camera direction is perhaps better than the original's. I would love to see AI combined with this for a photorealistic production. I've binged through all of them for the first time thanks to iPlayer and I'd agree - RoT absolutely stands head and shoulders above the rest of the animations (I then followed it up with Galaxy 4 - good grief, chalk and cheese). That one, I would've thought, could easily be used as a reference point (assuming AI could accurately interpret animation). The only question I had was whether the original direction really had quite so many extreme close-ups? Oh really? I think I need to watch this again, as I was very underwhelmed by it when I watched it on release and I'm not sure I've seen it since. Maybe it's because I'm not particularly a fan of RoT anyway, but I had it pegged as my least favourite animation until the Web 3 one appeared. I loved Galaxy 4 too and am looking forward to more Hartnells to compare it to in the future.
|
|
|
Post by John Wall on Feb 15, 2024 20:54:42 GMT
|
|
|
Post by markperry on Feb 15, 2024 21:24:35 GMT
What you have to remember with the telesnaps is that we only have frames, or corrupted frames affected by scene changes or by a cut from one camera to another. An average episode contains more than a thousand frames. There's only so much AI can get away with. And that's before we have matching photos of those scenes.
|
|
|
Post by Richard Bignell on Feb 15, 2024 21:59:31 GMT
What you have to remember with the telesnaps is that we only have frames, or corrupted frames affected by scene changes or by a cut from one camera to another. An average episode contains more than a thousand frames. Just a few more. A film recording of an episode would be around 36,000 frames long. Cura averaged 70 telesnaps per episode, so one telesnap for roughly every 500+ frames.
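The frame figures above are easy to sanity-check. A quick sketch, assuming a 25 fps film recording, a roughly 24-minute episode and the ~70 Cura telesnaps per episode quoted above:

```python
# Rough sanity check of the coverage figures quoted above.
# Assumed values: 25 fps film recording, ~24-minute episode, ~70 telesnaps.
FPS = 25
EPISODE_MINUTES = 24
TELESNAPS = 70

total_frames = FPS * 60 * EPISODE_MINUTES   # 36,000 frames
frames_per_snap = total_frames // TELESNAPS # how many frames each snap must cover

print(total_frames)     # 36000
print(frames_per_snap)  # 514 - i.e. one surviving image per ~500 frames
```

So each telesnap stands in for about 20 seconds of lost footage, which is why AI would be interpolating far more than it is copying.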
|
|
|
Post by brianfretwell on Feb 23, 2024 19:27:55 GMT
TV tended, and tends, to have close-ups due to lack of budget for sets, etc. And, with 405 lines and small screens, to give a better rendition of actors' faces and lip movements for those hard of hearing, I suspect.
|
|
|
Post by John Wall on Feb 23, 2024 21:37:13 GMT
And, with 405 lines and small screens, to give a better rendition of actors' faces and lip movements for those hard of hearing, I suspect. That's a good point, but I'm unsure if lip readers were considered important. Does anyone know?
|
|
|
Post by Jaspal Cheema on Feb 24, 2024 23:51:27 GMT
I watched the Reign of Terror for the first time last night and the animations are fantastic. The camera direction is perhaps better than the original's. I would love to see AI combined with this for a photorealistic production. I've binged through all of them for the first time thanks to iPlayer and I'd agree - RoT absolutely stands head and shoulders above the rest of the animations (I then followed it up with Galaxy 4 - good grief, chalk and cheese). That one, I would've thought, could easily be used as a reference point (assuming AI could accurately interpret animation). The only question I had was whether the original direction really had quite so many extreme close-ups? Encouraging that two members really appreciate the RoT animation, as it's always been considered flawed in terms of its relentless close-ups and quick cutting within scenes. I've also always thought it was superb and innovative, and one which brings me back time and again to what would otherwise have been a dry historical serial - and yes, the animation is better than the slow camera direction of the original.
|
|
|
Post by John Wall on Feb 25, 2024 20:31:02 GMT
It’s worth remembering that the aim isn’t to extrapolate the whole of reality from a piece of fairy cake.
I expect that the animations start with the drawing up of a camera script and then sorting out the sets/backgrounds. It's then a matter of "putting" animated figures "into" the sets and making sure the lips sync with the soundtrack. There are also usually 2-3 telesnaps for every minute - say one every twenty to thirty seconds.
The purpose of AI would be to achieve a better, more realistic recreation of a missing episode. Hopefully there would be enough visual material to produce a decent representation of the original sets, rather than the "sketchy" versions often seen in the animations. It shouldn't be too difficult to produce a camera script, remembering the constraints of multi-camera recording in the 60s. Then you "give" the AI the script and audio and tell it to "film" it according to the script but matched to the audio.
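The "2-3 telesnaps per minute" figure suggests what the first step of such a pipeline might look like: spreading the surviving stills along the audio track so each one anchors the footage around it. A minimal sketch, with the caveat that the function name and the even-spacing assumption are mine - real telesnaps were not taken at regular intervals:

```python
# Hypothetical "keyframe schedule": assign each telesnap an approximate
# timestamp along the episode's audio, assuming (unrealistically) even spacing.
def keyframe_schedule(audio_seconds: float, n_telesnaps: int) -> list[float]:
    """Return one approximate timestamp (in seconds) per telesnap."""
    gap = audio_seconds / n_telesnaps
    # Place each snap at the midpoint of its interval of the recording.
    return [gap * (i + 0.5) for i in range(n_telesnaps)]

# A ~24-minute episode with 70 surviving telesnaps:
times = keyframe_schedule(audio_seconds=24 * 60, n_telesnaps=70)
print(round(times[1] - times[0], 1))  # 20.6 seconds between anchor images
```

That ~20-second gap is the stretch an AI would have to invent between known images, guided only by the audio, the script and whatever the animation suggests about movement and camera direction.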
|
|