|
Post by brianfretwell on Feb 28, 2024 8:40:52 GMT
That’s a good point, but I’m unsure if lip readers were considered important. Does anyone know? I don't know about lip readers, but in a noisy household that helps everybody.
|
|
|
Post by barneyhall on Mar 3, 2024 10:39:47 GMT
So there's a tech YouTuber I follow who does a weekly newscast. They regularly have a section on AI, and this week they were showing a new type of software called Emote. It's particularly exciting as it animates people talking from a single still image. Here's a link to a very short clip showing the software working: youtube.com/clip/UgkxkvF9wg9POvgQAJ1_QOZ0iLDLZMcFNtHC?si=u5LUrQqTaneznqhWOr and if you want to see more about it: www.youtube.com/live/qs-sYNsqPYA?si=t43BHRo5vRGhx1Kw At about 2 hours and 5 minutes in, they discuss it for around 3 minutes. No, it's not perfect, and no, it's not being applied to Doctor Who... but this kind of AI video generation hasn't been around long at all, and it can already do this from one single image. We still have a long way to go, but I honestly think the future of reconstructions lies in extrapolating telesnaps into movement and then adding from there.
|
|
|
Post by jamesvincent on Mar 25, 2024 23:19:06 GMT
If no one else is going to try (I'm looking at you Josh Snares) I'll give it a go sometime this year.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Mar 30, 2024 13:37:59 GMT
If no one else is going to try (I'm looking at you Josh Snares) I'll give it a go sometime this year. Joshua doesn't even have an account here.
|
|
|
Post by Sara Irving on Mar 31, 2024 8:27:45 GMT
If no one else is going to try (I'm looking at you Josh Snares) I'll give it a go sometime this year. Joshua doesn't even have an account here. It's a public forum, Josh doesn't need an account to view posts. I'd been reading threads on here for years before I signed up for an account to comment. I watched some parts of the AI'd DMP3 the other day. A promising start, but IMO there's a lot of work to get it to a more useable point. It's very obvious the starting point was a telesnap reconstruction, and that the basis was still photos. Putting aside my own views on the potential future consequences of AI, it'll be interesting to see what people come out with over the next 6-12 months as the technology advances more, especially with more input (or perhaps giving the AI more freedom when it comes to not directly following something), as I found the interactions between characters to be the most distracting, especially in cases where it was just jumping between close-ups of a character speaking.
|
|
|
Post by awpeacock on Mar 31, 2024 9:21:39 GMT
Joshua doesn't even have an account here. It's a public forum, Josh doesn't need an account to view posts. I'd been reading threads on here for years before I signed up for an account to comment. I watched some parts of the AI'd DMP3 the other day. A promising start, but IMO there's a lot of work to get it to a more useable point. It's very obvious the starting point was a telesnap reconstruction, and that the basis was still photos. Putting aside my own views on the potential future consequences of AI, it'll be interesting to see what people come out with over the next 6-12 months as the technology advances more, especially with more input (or perhaps giving the AI more freedom when it comes to not directly following something), as I found the interactions between characters to be the most distracting, especially in cases where it was just jumping between close-ups of a character speaking. Which is why I think the animations will have a part to play - if the AI has "prompts" of how to fill in the gaps and add movement surely the efforts will be a lot less stunted?
|
|
|
Post by Sara Irving on Mar 31, 2024 10:00:05 GMT
It's a public forum, Josh doesn't need an account to view posts. I'd been reading threads on here for years before I signed up for an account to comment. I watched some parts of the AI'd DMP3 the other day. A promising start, but IMO there's a lot of work to get it to a more useable point. It's very obvious the starting point was a telesnap reconstruction, and that the basis was still photos. Putting aside my own views on the potential future consequences of AI, it'll be interesting to see what people come out with over the next 6-12 months as the technology advances more, especially with more input (or perhaps giving the AI more freedom when it comes to not directly following something), as I found the interactions between characters to be the most distracting, especially in cases where it was just jumping between close-ups of a character speaking. Which is why I think the animations will have a part to play - if the AI has "prompts" of how to fill in the gaps and add movement surely the efforts will be a lot less stunted? This is one thing I've been wondering about - what would you get if you combined an animation of a missing story with telesnaps and whatever other material is available, and told it to produce a live action recreation; I suspect it would give a much more human result. Web 3 anyone - see if the AI sorts the character movement. Whilst I'm very keen on stories that haven't been animated being done, that might help with the AI learning element, giving it a better basis to work off for stories not yet animated.
|
|
|
Post by Marie Griffiths on Apr 8, 2024 13:28:13 GMT
This is what prompted me. I'd love to see those guys use a prompt of Doctor Who as a demo.
|
|
|
Post by George D on Apr 8, 2024 16:18:11 GMT
One of the biggest problems I'm seeing with the DMP isn't the AI, but the effort to pick or create congruent photos to use the AI on.
Hopefully their skill increases.
|
|
|
Post by jamesvincent on Jun 3, 2024 8:07:36 GMT
IMO this is how to do lost episodes until AI is better: youtu.be/nCkftxej5yg In a way I think this improves on the original episode in ways AI could not (yet).
|
|
|
Post by Michael D. Kimpton on Jun 3, 2024 11:55:43 GMT
Or, on the other hand, we could just, you know, not use AI altogether.
A few computer input commands cannot animate better than a human being, and honestly shouldn't. Using computers to help with something is one thing, but nowadays people are just completely fixated on computers doing it all for them. The Ice Warriors future, with everyone relying on a computer telling them what to do, is tragically becoming more and more realistic these days.
|
|
|
Post by awpeacock on Jun 3, 2024 12:40:22 GMT
Or, on the other hand, we could just, you know, not use AI altogether. A few computer input commands cannot animate better than a human being, and honestly shouldn't. Using computers to help with something is one thing, but nowadays people are just completely fixated on computers doing it all for them. The Ice Warriors future with everyone relying on a computer telling them what to do is tragically becoming more and more realistic these days. The use of AI we're talking about here is to recreate the live action version of these shows, rather than animations. And we'd be talking about it simply because it would take years and years to recreate the live action stuff "by hand", whereas AI could potentially turn these around fast enough that we could actually watch these missing shows in our lifetimes.
|
|
|
Post by Michael D. Kimpton on Jun 3, 2024 16:48:59 GMT
Or, on the other hand, we could just, you know, not use AI altogether. A few computer input commands cannot animate better than a human being, and honestly shouldn't. Using computers to help with something is one thing, but nowadays people are just completely fixated on computers doing it all for them. The Ice Warriors future with everyone relying on a computer telling them what to do is tragically becoming more and more realistic these days. The use of AI we're talking about here is to recreate the live action version of these shows, rather than animations. And we'd be talking about it simply because it would take years and years to recreate the live action stuff "by hand" whereas AI could potentially turn these around so we could actually watch these missing shows in our lifetimes. Given how Ian Levine's are turning out, from the one I saw, I think it's better to wait. No disservice to those who like 'em, but they look like Loose Cannon recons with 5% more lip movement and, with the deepest respect, I saw better lip synching in episodes of Captain Scarlet.
|
|
|
Post by George D on Jun 4, 2024 2:26:59 GMT
Or, on the other hand, we could just, you know, not use AI altogether. A few computer input commands cannot animate better than a human being, and honestly shouldn't. Using computers to help with something is one thing, but nowadays people are just completely fixated on computers doing it all for them. The Ice Warriors future with everyone relying on a computer telling them what to do is tragically becoming more and more realistic these days. The days of hand animation have been gone for years, especially on the Doctor Who budget. I think a lot of the problems weren't based on the AI but rather the stills chosen to work with, and the higher standard we have for live action than for animation. I think his DMP 7 has shown improvement, and it's probably the most definitive Feast of Steven we have. The one feature that I don't like is the constant zooming, but perhaps that movement helps distract from imperfections of the AI. The only way to get good at it is practice, and I give him credit for that. (Although someone else is likely doing the work.)
|
|
|
Post by nathangeorge on Jun 5, 2024 7:29:15 GMT
There's already a lot of very poor Doctor Who out there and most of it is cringe-inducingly bad. If AI was really that clever, it'd tell the user, "this looks stinky plops, son, and nobody's gonna like it."
|
|