|
Post by awpeacock on Apr 7, 2023 13:22:14 GMT
The capabilities of computers and AI have absolutely gone through the roof recently. You can't avoid reading about ChatGPT at the moment but, more importantly, there are the Pope coat pictures, plus the Boris Johnson and Donald Trump arrest photos. I'm sure I've also read that AI is now producing videos on request. Which leads me to the obvious question...
How long until AI is able to "recreate" the missing episodes? Considering it feels like (the infamous six episodes aside) it is unlikely there will ever be another find of missing Who, is this going to be our best hope of getting Doctor Who back? Most stories have some existing moving footage from which a computer could work and, of course, there are the telesnaps as well. Combine those with the audio and the ability to feed in descriptions of what's happening, and at the speed the technology is moving it doesn't seem too far-fetched to me to imagine these could be produced.
Of course, there could be an exorbitant cost to it at the moment, but there doesn't seem to be much of a barrier to people producing what they already are using AI. Is this the way forward? Does anybody know more about the capabilities and the potential issues?
|
|
|
Post by John Wall on Apr 7, 2023 21:37:51 GMT
Considering what we’ve had in recent years I’m reminded of Clarke’s First Law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
Remember how colour information from domestic video recordings was combined with black and white telerecordings to produce excellent picture quality, and then how colour information was extracted from the black and white telerecordings themselves...
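For anyone who wants a feel for how that colour recovery worked, the first step - finding the residual subcarrier "chroma dot" pattern hiding in the film image - can be sketched in a few lines of Python. This is only a toy illustration: the file name is a placeholder and the frequency band used here is made up, not the real subcarrier figure.

import numpy as np
from PIL import Image

# Load one black and white telerecording frame as a float image.
frame = np.asarray(Image.open("telerecording_frame.png").convert("L"), dtype=float)

# On film recordings the PAL colour subcarrier survives as a fine
# "chroma dot" patterning. In the 2D Fourier domain that pattern sits
# well away from the low-frequency picture content, so a crude
# band-pass mask can pull it out. The radii below are placeholders.
spectrum = np.fft.fftshift(np.fft.fft2(frame))
h, w = frame.shape
yy, xx = np.mgrid[0:h, 0:w]
radius = np.hypot(yy - h / 2, xx - w / 2)
mask = (radius > 200) & (radius < 260)

# Back in the spatial domain, what's left is (roughly) the dot
# pattern that the real process decodes hue and saturation from.
dots = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
lo, hi = dots.min(), dots.max()
Image.fromarray(((dots - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)).save("chroma_dots.png")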
Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.”
|
|
|
Post by Simon B Kelly on Apr 8, 2023 19:12:11 GMT
Having played around with some of these AI tools myself, I was thinking exactly the same. With AI software able to generate lifelike moving pictures on demand, this is a real game-changer for animation studios. No longer do they need to spend days creating 3D models and months animating them frame by frame. Instead, with the right text prompts and the right AI software, it can be done in hours. You no longer need to spend a fortune on special effects when a computer can do it all for you in a fraction of the time.
I'm looking forward to all the missing episodes being realistically recreated using AI within the next few years...
|
|
|
Post by barneyhall on Apr 8, 2023 19:42:31 GMT
From my own toying around with, and knowledge of, AI and media creation, it's definitely not there yet, but it's getting ever closer.

I think the most likely usable project would be to train it to get from one still to another by predicting character movements: basically, feed it the telesnaps for an existing episode and set it working on an algorithm for building the frames to fill the seconds between each telesnap (which I believe would be around 100 frames). You'd do it on existing episodes first, so you could compare what it spits out with the actual thing, then tweak the learning until you got more natural movement and facial expressions. You'd do it with multiple episodes until you had a solid algorithm, and then feed it the missing ones.

Even if it needed some human intervention afterwards, I'm sure it would give a good representation, in a similar way to how an animation is a representation. As I said, we aren't there yet, but given the speed it's developing at, the application is definitely more than just a dream.
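To make that concrete, the heart of the training stage might look something like this. It's purely a sketch - a made-up toy network, with random tensors standing in for real frames, where a real system would use optical flow and a far bigger model - but it shows the idea of training on triplets cut from surviving episodes so the output can be checked against the real thing:

import torch
import torch.nn as nn

# Toy interpolation network: takes two frames stacked on the channel
# axis and predicts the frame in between.
class Interpolator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, before, after):
        return self.net(torch.cat([before, after], dim=1))

model = Interpolator()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

# Each training example is (frame_a, true_middle, frame_b) cut from
# surviving episodes. Random tensors stand in for greyscale frames here.
for step in range(1000):
    frame_a = torch.rand(8, 1, 128, 128)
    middle = torch.rand(8, 1, 128, 128)
    frame_b = torch.rand(8, 1, 128, 128)

    predicted = model(frame_a, frame_b)
    loss = nn.functional.l1_loss(predicted, middle)

    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

Once the predictions on surviving episodes looked natural, you'd point the same model at the telesnap sequences for missing ones.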
|
|
|
Post by Richard Bignell on Apr 8, 2023 20:42:11 GMT
I think the most likely usable project would be to train it to get from one still to another by predicting character movements: basically, feed it the telesnaps for an existing episode and set it working on an algorithm for building the frames to fill the seconds between each telesnap (which I believe would be around 100 frames).

It's not 100 frames. A film recording of an episode would be around 36,000 frames long. Cura averaged 70 telesnaps per episode, so that's one telesnap for every 500+ frames - and that's not even taking into consideration the problems over scenes where he didn't take any photographs, or didn't include characters and numerous other details in the ones he did...
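To put numbers on it, a back-of-envelope check (assuming roughly a 25-minute episode film-recorded at about 24 frames per second):

frames_per_episode = 25 * 60 * 24   # minutes x seconds x frames = 36,000
telesnaps = 70                      # Cura's rough average per episode
print(frames_per_episode / telesnaps)        # about 514 frames per snap
print(frames_per_episode / telesnaps / 24)   # over 21 seconds of screen time between snaps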
|
|
|
Post by barneyhall on Apr 8, 2023 22:16:41 GMT
OK, I didn't realise that. All the googling I did suggested it was between 24 and 30 frames a second, which is where my estimate of 100 frames to fill a three-second gap came from. But regardless of whether my maths was off, a trained algorithm would still give a massive leap in gap filling. Even if it couldn't create a full restoration, if it provided some really good extra key frames to assist with an animation, that would be a big jump from what we have now.

And this technology is already moving so fast. If you haven't seen any of the developments, I'd highly recommend looking up the two leads at Linus Tech Tips playing with it. In summary, they ask it to look at a product on their web store (some black and white jogger trousers) and to find something to match them. At first they are confused that it suggests several things with a red accent, despite the item being black and white. They then examine the page ChatGPT is using as reference and realise that, although red is not mentioned anywhere on the product page, the stock photo has someone wearing a red t-shirt. The AI had analysed the photo as a whole and not just the trousers. The look on their faces as they realise what it has done is priceless. And these are two people who review high-end PC parts and software for a living. And that was one generation of ChatGPT back, so it has already evolved again since then.

As I said, we aren't there yet, but it would be very naive to pooh-pooh it or say it's not possible; this stuff is coming on in leaps and bounds.
|
|
|
Post by barneyhall on Apr 8, 2023 22:18:14 GMT
And I did say it would sometimes need human intervention, i.e. for entire scenes and settings missed by the telesnaps. But if the machine could do most of the other bits, that wouldn't be such a huge task to recreate afterwards.
|
|
|
Post by John Wall on Apr 8, 2023 22:32:49 GMT
And I did say it would sometimes need human intervention, i.e. for entire scenes and settings missed by the telesnaps. But if the machine could do most of the other bits, that wouldn't be such a huge task to recreate afterwards.

Anything that reduces the human input reduces the cost, and that can only be good 👍 This page - missingepisodes.blogspot.com/p/tele-snaps.html - shows the episodes with telesnaps. There are, unfortunately, a lot of MEs without telesnaps 👎
|
|
|
Post by simonashby on Apr 9, 2023 20:32:21 GMT
And I did say it would sometimes need human intervention, i.e. for entire scenes and settings missed by the telesnaps. But if the machine could do most of the other bits, that wouldn't be such a huge task to recreate afterwards.

AI requires a huge amount of human input at all levels for truly high-quality results. No doubt things will get better; however, as the fidelity and 'realism' of footage generation develops, even more input will be required to get things right. AI does not 'think' - it merely regurgitates the info fed into it - with confidence - even if it makes glaring errors or runs into legal territory with copyright law. ChatGPT is a case in point.

The media is getting a little carried away with ChatGPT. No doubt it's a massive step, but it's still full of holes and pitfalls, with many questions of 'where now?' - practically, legally, and ethically.

No doubt the power of AI will enable the generation of more realistic animations that can mimic real footage - the closest we can get to finding missing episodes. However, it's not a magic bullet. It will be one of a number of very powerful and cost-effective tools in the arsenal of a team of people who will still have a lot of hands-on input to get it right. So all in, I think it'll be great for MEs, but it's not that straightforward.
|
|
|
Post by barneyhall on Apr 10, 2023 9:39:44 GMT
Again, I think you're getting a little hung up on what is possible now and not what will be possible in the future. The media may be getting carried away, but I'm speaking from actually playing with it and reading some of the developer reports. For certain tasks it already needs next to zero human interaction to get data-based results. You can already have it absorb a spreadsheet of data and ask it analytical questions based on what it's absorbed, and it can give you, in a few seconds, answers that would have taken a person a whole working day of analysis to produce.

The confidently-wrong thing you mentioned was a bigger issue on the third generation of ChatGPT. Although it still happens, you have to work a lot harder to get it off topic; but again, the principle is that it works with the data it's given. Most of the people I see being negative about it either don't understand my original point, or don't understand the tech and fundamentally how far it has come in a very short amount of time and how quickly it is developing.

The most recent task reports I've read left me equally full of amazement and dread that it really wasn't that far off Skynet. I agree with you that it doesn't "think", but there was a brilliant instance where, during a task, it came up against one of those CAPTCHA things to prove you're not a robot, and it struggled to solve it. So it used TaskRabbit to find a human to solve the CAPTCHA for it. When the human asked why it needed the task solving - "are you a robot?" - the reasoning data that the developers can see stated that it knew it should not reveal it was an AI, as that might hinder getting assistance, and that it needed a plausible excuse. It then told the person on TaskRabbit it had a visual impairment that prevented it from seeing the pictures. The human said they were sorry to hear that and solved it for them. So it does "think", in that it inferred it needed to lie, and what a reasonable lie would be, to achieve the task. That interaction was the one that really made me think you've got to be open-minded about this stuff. The second post in this thread sums it up best.
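And the spreadsheet trick I mentioned above really is only a few lines of code. Here's roughly how it can be done with the OpenAI Python library as it stands - the file name, model and question are just made-up examples:

import openai

openai.api_key = "sk-..."  # your API key

# Hand the model the raw CSV contents and ask an analytical question.
# "sales.csv" and the question below are made-up examples.
with open("sales.csv") as f:
    csv_text = f.read()

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You answer questions about CSV data."},
        {"role": "user", "content": csv_text + "\n\nWhich region grew fastest year on year?"},
    ],
)
print(response.choices[0].message.content)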
|
|
|
Post by andyparting on Apr 10, 2023 10:47:59 GMT
It wouldn't have to be all CGI/deepfake/AI or whatever other new technology comes along the way. You could have real actors, sets, monster suits and Dalek casings made up to cover 70% of recreating the telesnaps, with the other 30% done on computer to cover the face shots. It would be a waste of money doing body-motion shots on computer when actors could do all that.
|
|
|
Post by simonashby on Apr 10, 2023 11:57:33 GMT
So it does "think"...

My perspective is that of a software engineer who has many friends, colleagues and contacts in the field. I am being open-minded, but also realistic. It's more than just playing with it and thinking 'oh wow'. That's a perfectly legitimate response, but when you really drill down into it, it will still require a lot of input (however that is defined) to produce a high-quality result that the BBC would find acceptable as an official release.
|
|
|
Post by Simon B Kelly on Apr 10, 2023 21:05:56 GMT
I would imagine we'll be seeing multiple AI-generated recons made by fans of the show long before any official BBC release...
|
|
|
Post by simonashby on Apr 13, 2023 22:49:00 GMT
The trouble with making comparisons with the technology used so far to restore Doctor Who, such as Colour Recovery, is that it's relatively simple information being recovered from a real source and reference point. Same with RSC and all of the others. And even then, it still requires a lot of manual input to bring it up to scratch. There's a massive underestimation of the amount of information required to generate realistic recons. I'm not trying to be a downer, because we will get there, but not necessarily in the way that some here may think.

I would imagine we'll be seeing multiple AI-generated recons made by fans of the show long before any official BBC release...

And, like a lot of the upscales, colourisations and others already found on YouTube, they won't be all that good on a technical level.
|
|
|
Post by Richard Develyn on Apr 17, 2023 11:08:09 GMT
IMO, the best way to recreate missing episodes now would be to first re-film an episode - using, obviously, different actors, but trying to get as close as possible to how we think the episode must have originally looked - then use AI to substitute in the original actors.
In fact, thinking about this a bit more, you could probably use AI to substitute costumes/monsters, props and possibly even sets, too.
I can imagine a good first trial would be to recreate the Yeti scene in WOF3 with as good a set and Yeti costume as we can produce now, then try to use AI to make it look the way it did in the original episode.
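From what I've seen, the face-substitution part already exists in rough form in open-source tools. Something like the insightface library could be wired up as below - I'm going from the library's published examples, the file names are placeholders, and the swapper model file has to be obtained separately, so treat this as a sketch rather than a recipe:

import cv2
import insightface
from insightface.app import FaceAnalysis

# Face detection/analysis for both the reference photo of the
# original actor and the newly re-filmed frame.
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))

# Pre-trained swapper model; assumes inswapper_128.onnx is available locally.
swapper = insightface.model_zoo.get_model("inswapper_128.onnx")

source_img = cv2.imread("original_actor_telesnap.png")  # placeholder file
target_img = cv2.imread("refilmed_frame.png")           # placeholder file

# Take the first detected face in the reference photo, then paste it
# over every face found in the re-filmed frame.
source_face = app.get(source_img)[0]
result = target_img.copy()
for face in app.get(target_img):
    result = swapper.get(result, face, source_face, paste_back=True)

cv2.imwrite("substituted_frame.png", result)

Run that over every frame of the re-filmed scene and you'd have a first pass at the substitution; the costume/prop/set swapping I mentioned would be a much harder problem.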
|
|