|
Post by scotttelfer on Feb 23, 2014 21:54:58 GMT
Did you know that if you gave every planet in the universe a deck of cards and had each of them shuffle once a second since the start of the universe, between them they still wouldn't have produced every possible ordering, and won't have for many billions of years to come?
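If anyone wants to put numbers on that, here's a quick back-of-the-envelope sketch in Python; the planet count and the age of the universe are rough assumptions of mine, not exact figures.

```python
# Rough check of the card-shuffling claim. The planet count and the age of
# the universe below are ballpark assumptions, not precise values.
import math

orderings = math.factorial(52)                  # distinct orderings of a 52-card deck (~8.07e67)
planets = 10**24                                # very rough guess at planets in the observable universe
seconds_so_far = 13.8e9 * 365.25 * 24 * 3600    # seconds since the Big Bang (~4.35e17)

shuffles_done = planets * seconds_so_far        # one shuffle per planet per second
fraction_covered = shuffles_done / orderings

print(f"Orderings of a deck: {orderings:.2e}")
print(f"Shuffles so far:     {shuffles_done:.2e}")
print(f"Fraction covered:    {fraction_covered:.2e}")   # around 5e-27, i.e. essentially nothing
```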
It would be impossible to get right, but I'm sure some predictions could be made. For example, I remember them talking about The War Machines missing about a second of footage, so they were able to run a computer program to fill in the missing frames. I doubt it would be possible to run something like that for several minutes, but a good approximation of quite a bit of footage could be achieved if there were telesnaps fairly close together.
You would be looking at Star Trek holodeck levels of technology to get something decent, and even then it wouldn't be perfect. I remember there is an episode of Voyager where Janeway goes into Tuvok's mind and sees Sulu, and notes he looks different from his holodeck portraits because the computers still weren't able to predict faces accurately enough to fool the human eye (although clearly they could by the next century). So if Star Trek is right we'll be able to reconstruct everything by the 24th century, but equally we'd have FTL travel and primitive time travel capabilities by then (not to mention alien allies who may have recorded it in the first place), so that isn't really an issue.
On another note, talking about digital data storage: the episodes are all stored on external hard drives at over 300 GB each, which is far more than even HD needs today, never mind the sub-SD quality that most episodes were filmed at. The BBC stores SD at 1.4 GB per hour, so a 25-minute episode is being held at well over 500 times the necessary capacity. That means there is a lot of redundancy in there, so if one bit goes wrong it can be fairly safely predicted what it should have been from the surrounding data.
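For what it's worth, the ratio is easy to sanity-check; the 25-minute episode length is my assumption, the other figures are the ones quoted above.

```python
# Sanity check of the storage ratio. The 25-minute episode length is an
# assumption; the 300 GB and 1.4 GB/hour figures come from the post above.
archive_size_gb = 300          # per-episode size on the external drives
sd_rate_gb_per_hour = 1.4      # BBC SD storage rate
episode_minutes = 25           # assumed typical 1960s episode length

needed_gb = sd_rate_gb_per_hour * episode_minutes / 60   # ~0.58 GB
ratio = archive_size_gb / needed_gb
print(f"SD copy needs ~{needed_gb:.2f} GB; the archive copy is ~{ratio:.0f} times larger")
```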
|
|
|
Post by Marty Schultz on Feb 23, 2014 22:03:49 GMT
'Neural network' - seriously? LOL. I think FORTRAN may be a bit long in the tooth for this made-up computer.
|
|
|
Post by scotttelfer on Feb 23, 2014 22:14:52 GMT
I think far better options are available for most episodes, although you would only be able to reconstruct seconds of footage at best; the rest would be monkeys and Shakespeare. The most obvious example would be Marco Polo, which has absolutely no visual evidence left of its fourth episode.
It would certainly be useful for filling in small gaps (as I said, a system of this sort is already in effect for "full" episodes), but would it be possible to apply it to the gaps between telesnaps? It would certainly be an interesting idea. Many of the telesnap sets start with a few images showing the onscreen credits (the story name, the episode number, the writer, etc.), no doubt with only minor changes in the background between them, so perhaps somebody with some spare time could try to put together a prototype and reconstruct the starts of a few episodes.
|
|
|
Post by markboulton on Feb 23, 2014 22:51:10 GMT
The improbability, however, of a randomly generated set of pixel luminance values on each successive frame matching something that reasonably follows the action of the previous frame, or knowing when not to follow it (for a scene cut), would be a ginormous multiplicand on top of the purely quantitative count of possible bit-value combinations.
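To give a sense of the baseline those combinations start from, before the "following the action" constraint even enters into it, here's a rough count of how many distinct single frames there are; the frame size and grey-level count are assumptions purely for illustration.

```python
# How many distinct single frames exist at an assumed digitisation?
# The resolution and grey-level count are illustrative assumptions only.
import math

width, height = 720, 576     # assumed digitised frame size
grey_levels = 256            # assumed 8-bit luminance

pixels = width * height
# Number of distinct frames is grey_levels ** pixels; report it as a power of ten.
digits = pixels * math.log10(grey_levels)
print(f"Roughly 10^{digits:.0f} possible frames of this size")   # about 10^998745
```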
|
|
|
Post by markboulton on Feb 23, 2014 22:51:54 GMT
Also, you may find when you finally approach the computer and ask him, "Well, have you done it?",
he'll say...
"Yes... But you're not going to like it".
|
|
|
Post by scotttelfer on Feb 23, 2014 23:19:37 GMT
There is one other thing worth pointing out, though: the advantage a black and white image has is that if you wanted it to be accurate to within 5%, you'd actually be looking for matches in both the 95-100% AND the 0-5% ranges (because a totally "wrong" image is just going to be the negative). However, that brings us back to the old issue: what would be the point?
How many people have watched streamed footage online? I'm guessing practically all of us. Have you ever had a slight hiccup while watching, where a bit of the image just seems to "freeze" while the rest carries on regardless? Allow me to explain why this happens. Most frames in a video are practically the same as the one before them; there will be differences, but for the most part they are identical. So a compression system that has to reduce the storage requirement of a video looks for adjacent frames that are near-identical and leaves the unchanged parts of the screen "blank". The decoder then realises part of the image is missing and fills it in with the last image it had for that part of the screen. If you lose a crucial frame, the computer assumes it was the same as before and keeps the old image there.
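In code terms, the idea looks roughly like the toy sketch below; it only illustrates the "store what changed" principle and is nothing like how a real codec actually works.

```python
# Toy sketch of inter-frame delta storage: keep only the pixels that changed
# since the previous frame and carry the rest forward. Illustrative only.
import numpy as np

def encode_delta(prev_frame, frame):
    """Return a mask of changed pixels and their new values."""
    mask = frame != prev_frame
    return mask, frame[mask]

def decode_delta(prev_frame, mask, values):
    """Rebuild a frame by copying the previous frame and patching the changes."""
    frame = prev_frame.copy()
    frame[mask] = values
    return frame

# Two tiny 4x4 greyscale "frames" where only one corner changes.
prev_frame = np.zeros((4, 4), dtype=np.uint8)
frame = prev_frame.copy()
frame[0, 0] = 200

mask, values = encode_delta(prev_frame, frame)
print(np.array_equal(decode_delta(prev_frame, mask, values), frame))   # True

# If the delta data is lost, the decoder simply keeps showing the old frame,
# which is the "frozen chunk" effect described above.
stale_frame = prev_frame.copy()
```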
What is the point of this explanation? Well, as you will know, sometimes it is barely noticeable, but other times you are left with big chunks of the screen frozen. Those big chunks are where the decoder assumed the image was identical to the preceding frame, in which case, for that part of the picture, you might as well be looking at the old image.
While it wouldn't hold for every episode of Doctor Who, in heavily studio-bound episodes where it is mostly shots of people's faces, the image won't actually change that much from frame to frame (except across cuts), in which case you'll probably get a fairly accurate picture from nothing more advanced than a telesnap reconstruction. Perhaps somebody would care to look at the telesnaps for The Web of Fear, compare them to the episodes we now have, and see how much we have truly gained in the surrounding frames; you'll probably find you already have about 80% of the image.
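If anyone does fancy trying that comparison, here is a rough sketch of how the overlap could be measured; the file names are placeholders and the 5% tolerance is just a suggestion.

```python
# Measure what fraction of pixels in a telesnap are close to the same pixels
# in the corresponding surviving frame. File names below are placeholders.
import numpy as np
from PIL import Image

def fraction_matching(telesnap_path, frame_path, tolerance=13):   # 13 is ~5% of the 0-255 range
    a = Image.open(telesnap_path).convert("L")
    b = Image.open(frame_path).convert("L").resize(a.size)
    diff = np.abs(np.asarray(a, dtype=int) - np.asarray(b, dtype=int))
    return (diff <= tolerance).mean()

# Example (placeholder file names):
# print(fraction_matching("web_of_fear_telesnap_01.png", "web_of_fear_frame_01.png"))
```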
|
|
Simon Collis
Member
I have started to dream of lost things
Posts: 536
|
Post by Simon Collis on Feb 23, 2014 23:23:18 GMT
Marty Schultz wrote: "'Neural network' - seriously? LOL. I think FORTRAN may be a bit long in the tooth for this made-up computer."
Artificial neural networks are used extensively in detecting cancer tumours, in optical character recognition... dozens of ways, daily. It's some of the fundamental work in AI - it's been around since the 1940s. I've used them a few times (I once worked for an insurance company; they used them for actuarial modelling, profit-margin modelling, things like that). My thinking was along the lines of getting a neural network to interpolate between the telesnaps. Yeah, you probably couldn't get more than a couple of halfway frames, but the original proposal was sheer random numbers, so I was at least trying to think of another idea that might help. (Oh, and here's a major FORTRAN compiler for Windows. Don't worry, it does C and C++ as well.)
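To be clear, the sketch below isn't a neural network at all; it's just the naive cross-fade baseline that any cleverer interpolation would have to beat. The file names are placeholders, and it assumes both telesnaps are the same size.

```python
# Naive baseline for "halfway" frames: a plain per-pixel cross-fade between
# two consecutive telesnaps. File names are placeholders; images must match in size.
import numpy as np
from PIL import Image

def halfway_frames(path_a, path_b, steps=3):
    """Blend two same-sized telesnaps into `steps` intermediate greyscale frames."""
    a = np.asarray(Image.open(path_a).convert("L"), dtype=float)
    b = np.asarray(Image.open(path_b).convert("L"), dtype=float)
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        blend = (1 - t) * a + t * b
        frames.append(Image.fromarray(blend.astype(np.uint8)))
    return frames

# Example (placeholder file names):
# for i, f in enumerate(halfway_frames("telesnap_01.png", "telesnap_02.png")):
#     f.save(f"halfway_{i + 1}.png")
```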
|
|
|
Post by scotttelfer on Feb 23, 2014 23:29:27 GMT
Marty Schultz wrote: "'Neural network' - seriously? LOL. I think FORTRAN may be a bit long in the tooth for this made-up computer."
Simon Collis wrote: "Artificial neural networks are used extensively in detecting cancer tumours, in optical character recognition... dozens of ways, daily. It's some of the fundamental work in AI - it's been around since the 1940s. I've used them a few times (I once worked for an insurance company; they used them for actuarial modelling, profit-margin modelling, things like that). My thinking was along the lines of getting a neural network to interpolate between the telesnaps. Yeah, you probably couldn't get more than a couple of halfway frames, but the original proposal was sheer random numbers, so I was at least trying to think of another idea that might help. (Oh, and here's a major FORTRAN compiler for Windows. Don't worry, it does C and C++ as well.)"
As I've said before, a similar system is already in use for "full" episodes that miss a few frames here or there, so why not try it out on telesnaps? If that can churn out several frames that are fairly good approximations, then there shouldn't be a problem if the images are fairly close together (such as at the start of episodes).
|
|
|
Post by Marty Schultz on Feb 23, 2014 23:35:40 GMT
Marty Schultz wrote: "'Neural network' - seriously? LOL. I think FORTRAN may be a bit long in the tooth for this made-up computer."
Simon Collis wrote: "Artificial neural networks are used extensively in detecting cancer tumours, in optical character recognition... dozens of ways, daily. It's some of the fundamental work in AI - it's been around since the 1940s. I've used them a few times (I once worked for an insurance company; they used them for actuarial modelling, profit-margin modelling, things like that). My thinking was along the lines of getting a neural network to interpolate between the telesnaps. Yeah, you probably couldn't get more than a couple of halfway frames, but the original proposal was sheer random numbers, so I was at least trying to think of another idea that might help. (Oh, and here's a major FORTRAN compiler for Windows. Don't worry, it does C and C++ as well.)"
The problem with random generation is that it is theoretically more likely that you will produce a new and understandable episode of old Who than the exact reproduction you are looking for. As for a neural network - you may as well animate.
|
|
Simon Collis
Member
I have started to dream of lost things
Posts: 536
|
Post by Simon Collis on Feb 23, 2014 23:47:15 GMT
Marty Schultz wrote: "The problem with random generation is that it is theoretically more likely that you will produce a new and understandable episode of old Who than the exact reproduction you are looking for. As for a neural network - you may as well animate."
Which makes the answer to the original question that - whichever way you slice and dice it - there isn't really a realistic prospect of doing it that way.
|
|
|
Post by scotttelfer on Feb 24, 2014 12:20:42 GMT
Simon Collis wrote: "Artificial neural networks are used extensively in detecting cancer tumours, in optical character recognition... dozens of ways, daily. It's some of the fundamental work in AI - it's been around since the 1940s. I've used them a few times (I once worked for an insurance company; they used them for actuarial modelling, profit-margin modelling, things like that). My thinking was along the lines of getting a neural network to interpolate between the telesnaps. Yeah, you probably couldn't get more than a couple of halfway frames, but the original proposal was sheer random numbers, so I was at least trying to think of another idea that might help. (Oh, and here's a major FORTRAN compiler for Windows. Don't worry, it does C and C++ as well.)"
Marty Schultz wrote: "The problem with random generation is that it is theoretically more likely that you will produce a new and understandable episode of old Who than the exact reproduction you are looking for. As for a neural network - you may as well animate."
You'd be more likely to end up with Ant and Dec appearing in half the episode than getting the real thing; that's the fact of the matter. Random generation wouldn't work: allow a high enough margin of inaccuracy and you might as well have a telesnap reconstruction. The best you can do is have a computer estimate what should be happening between the images.
|
|
|
Post by brianfretwell on Feb 24, 2014 13:29:03 GMT
Isn't this idea like watching an analogue TV with no signal and hoping that a picture will appear by chance every few millennia?
Also, if it did work, what would we do if it found an episode from the 2020 series? Would that make it the future scanner from the TARDIS?
|
|
|
Post by scotttelfer on Feb 24, 2014 13:32:25 GMT
brianfretwell wrote: "Isn't this idea like watching an analogue TV with no signal and hoping that a picture will appear by chance every few millennia? Also, if it did work, what would we do if it found an episode from the 2020 series? Would that make it the future scanner from the TARDIS?"
That's an absolutely perfect analogy. You'd get so many rubbish images, alongside utterly bizarre ones, that it would become a bit pointless unless you happened across that one correct image, and really the end result wouldn't be much better than doing some Photoshop work on existing telesnaps.
|
|
|
Post by Alex Dering on Feb 24, 2014 15:27:00 GMT
One detail: the process wouldn't be "random." Trying to generate Shakespeare randomly takes an enormous amount of time, but if you allow a supervisory function in which each correct randomly typed letter is retained, you actually get the complete works pretty fast. Similarly, there are cheats that could be used even with current tech to cut down on the process. Stand-ins could wear tracking devices so that the motion of the scene, as well as the actual mouth shapes as the lines are said, could be framed on computer and then overlaid with a "best guess" of the original. Put someone the same shape and size as Hartnell in the same outfit, have him cross the room doing his best Hartnell walk, and have a computer overlay a digitized Hartnell (enough footage of him must exist to model him in any articulation). Then have people look at it and flag which parts look particularly odious (I suspect, as a purist at heart, that I'd find the whole thing odious). Rinse, repeat.
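That "retain each correct letter" process really does converge quickly; here is a toy version in Python, with a short example phrase standing in for Shakespeare.

```python
# Toy version of random typing with a supervisory step that locks in each
# correct letter. The target phrase is just an example.
import random
import string

TARGET = "TO BE OR NOT TO BE"
ALPHABET = string.ascii_uppercase + " "

attempt = [None] * len(TARGET)   # None marks positions not yet typed correctly
rounds = 0
while None in attempt:
    rounds += 1
    for i, target_char in enumerate(TARGET):
        if attempt[i] is None and random.choice(ALPHABET) == target_char:
            attempt[i] = target_char   # supervisory function retains the correct letter

print("".join(attempt), "- reached after", rounds, "rounds")   # typically around a hundred rounds
```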
|
|