|
Post by jameswale85 on Aug 2, 2023 20:02:30 GMT
Would be fun if they could use AI to do anything with the telesnaps from The Feast of Steven. I think we can safely say that's never coming back, so what's to lose? Even in the current uncanny-valley state these AI estimations are no odder than some of the poor animations. A couple more years and it should be far more accurate. That would be my suggestion: using Hartnell's piece to camera that was telesnapped by the Dalek operator. Great minds!
|
|
|
Post by andrewfrostick on Aug 2, 2023 20:46:11 GMT
jameswale85 said: "Would be fun if they could use AI to do anything with the telesnaps from The Feast of Steven. I think we can safely say that's never coming back, so what's to lose? Even in the current uncanny-valley state these AI estimations are no odder than some of the poor animations. A couple more years and it should be far more accurate."

Technically they aren't telesnaps, but I'm sure something could be done. Might even be able to do even more if someone used frames from Z-Cars episodes to help fill in the police scene a bit too.

jameswale85 replied: "You are correct. I was really thinking of Robert Jewell's off-screen photos which he took of FofS. That said, they are not as comprehensive as the official telesnaps."
|
|
|
Post by jcoleman on Aug 4, 2023 22:52:17 GMT
Something that AI might be able to tackle in the near future is recreating the missing information from skip-field video and field-suppressed film telerecordings. I can imagine a human artist doing a decent job on a single frame, but it might take days, and there are thousands of frames on a reel. There's also the problem of the occasional out-of-phase film inserts that have double images, which hasn't been cracked yet as far as I know.

That's a very interesting avenue, imho 👍 I wonder if it might help recolourisation of, for example, IotD1? There are another five episodes to learn from. I stumbled across this earlier: Doctor Who: The Chase (Colourised With AI). "This Doctor Who colourisation was created with an AI-based colourisation tool. Manual frame-by-frame colourisations can take days for one scene; this AI technique takes only hours. With other AI video colourisation such as DeOldify, a black & white video is uploaded to the model and a colour video is produced. With this method, each black & white shot is uploaded along with a colour reference image (for this I took a screen grab of each shot and colourised it with a different AI, plus clean-up in Photoshop). This allows for a lot of control over the final colourised video, allowing for more accurate colours and a better-looking final product."
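The reference-image workflow that quote describes boils down to a simple idea: keep the luma (brightness) of the original black-and-white frame and borrow the chroma (colour) channels from an aligned, manually colourised reference. The toy Python sketch below is not the actual tool's code, just an illustration of the principle; the 1×2 "images" of (Y, Cb, Cr) pixels and all names are invented, and real footage would of course need per-shot references and spatial alignment.

```python
def transfer_chroma(bw_frame, colour_ref):
    """Combine B&W luma with reference chroma, pixel by pixel.

    Both arguments are rows of (Y, Cb, Cr) tuples of the same shape.
    The output keeps Y from the B&W frame and Cb/Cr from the reference.
    """
    out = []
    for bw_row, ref_row in zip(bw_frame, colour_ref):
        out.append([(y, cb, cr)
                    for (y, _, _), (_, cb, cr) in zip(bw_row, ref_row)])
    return out

# Toy data: a 1x2 B&W frame (chroma neutral at 128) and a colour reference.
bw = [[(200, 128, 128), (50, 128, 128)]]
ref = [[(180, 90, 160), (60, 140, 100)]]  # arbitrary "warm" and "cool" chroma

print(transfer_chroma(bw, ref))
# → [[(200, 90, 160), (50, 140, 100)]]  (luma from bw, chroma from ref)
```

The appeal of this over a fully automatic colouriser is exactly what the quote says: the colour decisions live in the human-made reference image, so the AI only has to propagate them.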
|
|
|
Post by Ace St.John on Aug 5, 2023 22:47:28 GMT
And here is the whole episode colourised with a different AI programme, by the two people who created the colourisation program. Almost all B&W episodes have been done by the same colourists and are available right now on BitChute and the Internet Archive: www.bitchute.com/video/MhwlNyiX32p2/
|
|
|
Post by Rob Moss on Aug 8, 2023 15:02:47 GMT
Ace St.John said: "And here is the whole episode colourised with a different AI programme, by the two people who created the colourisation program. Almost all B&W episodes have been done by the same colourists and are available right now on BitChute and the Internet Archive: www.bitchute.com/video/MhwlNyiX32p2/"

I still find that unwatchable. The early AI efforts were far worse, insofar as they were much less stable, but there's still a lot of wild colour variation between frames, and skin tones are still horrifically flat.
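The "wild colour variations between frames" complaint is a temporal-stability problem: each frame is colourised independently, so chroma jitters from frame to frame. One standard (if crude) mitigation is to smooth the chroma channels over time, e.g. with an exponential moving average. This is a generic illustration in Python, not anything the colourists named above actually do; the per-frame (Cb, Cr) values and the `alpha` setting are invented.

```python
def smooth_chroma(frames, alpha=0.3):
    """Exponential moving average over per-frame (Cb, Cr) chroma pairs.

    Lower alpha = heavier smoothing (more weight on previous frames),
    which damps flicker at the cost of slower response to real colour
    changes such as cuts.
    """
    smoothed, prev = [], None
    for cb, cr in frames:
        if prev is None:
            prev = (cb, cr)  # first frame passes through unchanged
        else:
            prev = (alpha * cb + (1 - alpha) * prev[0],
                    alpha * cr + (1 - alpha) * prev[1])
        smoothed.append(prev)
    return smoothed

# A flickering sequence: chroma jumping around a "true" value of (100, 140).
noisy = [(100, 140), (120, 120), (80, 160), (105, 135)]
print(smooth_chroma(noisy))
```

In practice this would be done per region rather than per frame (a global average would smear colour across cuts), but it shows why frame-independent colourisation flickers and what stabilisation has to trade away.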
|
|
|
Post by simonashby on Aug 8, 2023 22:48:33 GMT
Re colour: I really don't think simply relying on AI to do the work is the answer. We know from the recent Hancock colourisations that the number of manually coloured key frames required has reduced considerably since The Mind of Evil episode 1 was colourised manually. That's where AI will, and already does, shine with colourisation.
The problem with AI is that you can let it run and it'll do 'things' and we all go 'wow'. However, in the recreation of footage (or indeed colour) we want the AI to do some pretty specific things, especially if it's for an official release.
The relationship between manual input and AI is key to quality and intentional output.
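The key-frame idea above can be made concrete: a human colours a handful of frames by hand, and software fills the gaps in between (the better the automation, the fewer key frames are needed). The sketch below is only a toy, linear-interpolation version of that gap-filling in Python; it says nothing about how the Hancock or Mind of Evil colourisations were actually produced, and all names and frame indices are invented.

```python
def interpolate_chroma(key_frames, n_frames):
    """Fill chroma for n_frames given manually coloured key frames.

    key_frames maps a frame index to a hand-chosen (Cb, Cr) pair.
    Frames between two keys get linearly blended chroma; frames
    before the first or after the last key copy the nearest key.
    """
    keys = sorted(key_frames)
    out = []
    for i in range(n_frames):
        if i <= keys[0]:
            out.append(key_frames[keys[0]])
        elif i >= keys[-1]:
            out.append(key_frames[keys[-1]])
        else:
            lo = max(k for k in keys if k <= i)  # key at or before frame i
            hi = min(k for k in keys if k >= i)  # key at or after frame i
            if lo == hi:
                out.append(key_frames[lo])
            else:
                t = (i - lo) / (hi - lo)
                (cb0, cr0), (cb1, cr1) = key_frames[lo], key_frames[hi]
                out.append((cb0 + t * (cb1 - cb0), cr0 + t * (cr1 - cr0)))
    return out

# Two hand-coloured key frames (0 and 4); the three frames between are blended.
print(interpolate_chroma({0: (100, 140), 4: (120, 120)}, 5))
```

An AI in-betweener replaces the linear blend with something content-aware, which is exactly why fewer manual key frames are now required, but the manual keys are still what makes the output intentional rather than merely plausible.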
|
|
|
Post by Sue Butcher on Aug 9, 2023 2:47:18 GMT
If convincing video of missing television could be made using telesnaps and a human artist working with AI, who would own the copyright? The basic working material - the soundtrack, the stills, and the original camera script - would belong to the original creators or their successors, but what about the results?
|
|
|
Post by scotttelfer on Aug 9, 2023 5:46:13 GMT
Sue Butcher said: "If convincing video of missing television could be made using telesnaps and a human artist working with AI, who would own the copyright? The basic working material - the soundtrack, the stills, and the original camera script - would belong to the original creators or their successors, but what about the results?"

Derivative works are covered by copyright law. The creator of a derivative work holds copyright in that work, but the ability to create it depends on the rights holders of the original. However, it is also the case that the original rights holder cannot then claim the derivative work as their own, or use it without the permission of the other party.
|
|
|
Post by lousingh on Aug 10, 2023 12:44:05 GMT
How hard would it be to take a re-enactment like what was done with "Mission to the Unknown", use the telesnaps and other production pictures to animate the original actors, and then use AI to do a full reconstruction? I just wonder how the end credits should be handled in that case.
|
|
|
Post by jimhope on Aug 10, 2023 15:01:59 GMT
All I can say is thank god. I'm not in favour of having 60s Dr Who colourised in any way; it looks awful.
|
|
|
Post by andyparting on Aug 10, 2023 17:53:15 GMT
jimhope said: "All I can say is thank god. I'm not in favour of having 60s Dr Who colourised in any way; it looks awful."

Jim, when it comes to waiting for missing episodes to arrive, one couldn't ask for a better surname than yours.
|
|
|
Post by simonashby on Aug 10, 2023 22:27:01 GMT
lousingh said: "How hard would it be to take a re-enactment like what was done with 'Mission to the Unknown', use the telesnaps and other production pictures to animate the original actors, and then use AI to do a full reconstruction? I just wonder how the end credits should be handled in that case."

For a quality and usable result: hard. Or let's say, much harder than many here think.
|
|
|
Post by lousingh on Aug 11, 2023 12:26:02 GMT
simonashby said: "For a quality and usable result: hard. Or let's say, much harder than many here think."

That makes sense. The integration of the 3D renderings with the live action looks very arduous with the current state of the art.
|
|
|
Post by jimhope on Aug 16, 2023 14:06:33 GMT
andyparting said: "Jim, when it comes to waiting for missing episodes to arrive, one couldn't ask for a better surname than yours."
|
|