Are there any tools that make it easy to train your own neural network simply by providing sets of before/after images?
I can think of quite a few ways that I'd like to use it, and the volume of source material required seems like it probably would not be an issue - at least for some of them.
Left click on them.
Are these pictures supposed to be before and after? I don't see a difference.
There's information here on how to train models: https://github.com/xinntao/BasicSR
I stopped looking into it once I realized how large the training sets need to be.
The size of the dataset wouldn't be an issue for the sort of training I was thinking of doing. It relates to image compression for a specific type of image (rather than photographs), which makes a large dataset easy to generate: you just need high-quality source images, and you can automate the creation of images at varying degrees of compression.
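That kind of automated pair generation is how super-resolution datasets are usually built: keep the high-quality original, degrade a copy, and train on the (degraded, original) pairs. A minimal standard-library sketch of the idea, using box-average downsampling as the stand-in degradation (a real pipeline would use something like Pillow to re-encode JPEGs at several quality levels; the function names here are my own, not from any library):

```python
import random

def downsample(img, factor):
    """Box-average downsample a 2D grayscale image (list of lists of pixel values)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [img[y + dy][x + dx] for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def make_pairs(hr_images, factors=(2, 4)):
    """Turn high-quality images into (degraded, original) training pairs,
    one pair per degradation factor."""
    pairs = []
    for hr in hr_images:
        for f in factors:
            pairs.append((downsample(hr, f), hr))
    return pairs

# Tiny demo with one random 8x8 "image"
random.seed(0)
hr = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]
pairs = make_pairs([hr])
print(len(pairs), len(pairs[0][0]), len(pairs[0][0][0]))  # prints: 2 4 4
```

The same loop scales to thousands of source images, which is why dataset size stops being the bottleneck for this kind of training.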
Credit to Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi for their arXiv paper "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network". I suggest you give it a read if you're interested in this kind of thing.
Great stuff!

Hey! Thanks for using my Resident Evil image in the OP. I just made an account because I was googling around a bit and found this post. This upscaling thing is insane, and I have been doing a lot of work in the past few days, trying out different approaches, and I have seriously learned so much in so little time.
Oh, here are 80 more Resident Evil Remake images processed with Kingdomakrillics Manga109 model.
https://imgur.com/gallery/N0RgHOt
Hold it in front of a mirror and magic happens!
Is this real-time post-processing, or just screenshots that then get processed via the effect?
It's screenshots and original artwork, not real-time unfortunately. I'm using an RTX 2080 Ti and the longest processing time I've seen is about 8 seconds, so there would be a very long way to go to have this working in real time. Maybe one day.
I mean, what else would you want, if the AI is for image processing and upscaling?
Or backgrounds and assets redone in a point-and-click adventure like Monkey Island or REmake.
Damn, so the only implementation into a game will be texture mods.
In a world with infinite processing capability, not needing to preprocess everything and store it at several times the disk space of the original images would be useful. And in more modern games that already have high-resolution textures, it would be useful (and take less RAM) if it could selectively upscale only the textures it knows are being displayed very zoomed in.
I'm trying to install it. I get to the Windows CMD line, but it says:

'pip3' is not recognized as an internal or external command,
operable program or batch file.

I installed Python 3.7 and CUDA 10. Anyone know what I did wrong?
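That error usually just means Python's Scripts folder isn't on PATH, so Windows can't find pip3.exe. A sketch of the standard workaround, which is to invoke pip as a module through the interpreter instead (the package name is only an example, not ESRGAN's exact requirements list):

```shell
# On Windows, the py launcher is registered by the Python 3.7 installer
# even when python.exe itself isn't on PATH, so this works from CMD:
#   py -m pip install numpy

# On systems where the interpreter is on PATH, the equivalent check is:
python3 -m pip --version
```

Either way, `-m pip` sidesteps the PATH problem entirely because only the interpreter needs to be findable, not the pip3 executable.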
Are these pictures supposed to be before and after? I don't see a difference.
That's kind of the point, though: I am using the Manga109 model, which is meant for illustrations more than anything, so it's intentionally a lot softer. The model that comes with ESRGAN just doesn't compare, especially when it comes to backgrounds of LucasArts adventure games. If you want me to go back to using what I was using before, then by all means I will; I just wanted to mess around with this new model a bit.
Still though, if this isn't impressive then I don't know what is.
That's honestly so dope, great job dude.
This is unbelievable.

I think the remaster image is more polished overall, since the AI one has a couple of aliasing/upscaling errors, but goddamn, that is fucking impressive considering that not a single human was involved in the process (well, I guess one had to press the button, lol). This stuff is going to streamline the development of remasters and high-resolution content (for 4K, for example) to a significant degree.
Versus the remaster that multiple human beings had to put real work into (YouTube screengrab):
Transparency issues in the textures. But man, it's pretty easy to use this.
So you took the video, converted every frame into a PNG, then reconstructed it with audio? That's really impressive.

Yeah, doesn't work so well like that :D
Needs to be each texture.
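For anyone wanting to try the frame-by-frame approach anyway, the workflow described above can be sketched with ffmpeg (the filenames and the 30 fps frame rate are placeholders, not details from the original post):

```shell
# 1) Extract every frame of the video as a numbered PNG
ffmpeg -i input.mp4 frames/%06d.png

# 2) Upscale each PNG (e.g. with ESRGAN), writing results to upscaled/

# 3) Reassemble the frames at the source frame rate and mux the
#    original audio track back in from the source video
ffmpeg -framerate 30 -i upscaled/%06d.png -i input.mp4 \
       -map 0:v -map 1:a -c:v libx264 -pix_fmt yuv420p -shortest output.mp4
```

The catch, as noted, is that a per-frame model has no temporal consistency, so the result tends to flicker; upscaling the game's textures directly avoids that.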
Incredible!

Sekiro: Shadows Die Twice. 1080p promotional screenshot. ESRGAN model. 4K, downsampled for wallpaper. Cropped, full-scale comparison:
Bicubic resampling:
ESRGAN: