At Revolver, we’ve recently decided to dive deep into AI photography—expanding our creative pursuits into a space where, more often than not, a camera isn’t necessary. This is a significant shift, and one that will only accelerate as AI quality, prompting techniques, and creative understanding continue to evolve.
For now, our goal is simple: to create photography from scratch and see what happens. Our first test run provided valuable insights. While AI is advancing rapidly, it is still not at the level needed for enterprise creative work: consistent subjects and accurate environments remain weak points. However, the potential is undeniable.
The Vision
We are fans of ‘90s aesthetics—skateparks, downtown cityscapes, abandoned buildings, and nostalgic pop culture. For our first real experiment in AI-generated photography, we wanted to work within this familiar territory. Our chosen theme? A ‘90s-era skatepark in California.
To challenge AI while keeping the scope feasible, we aimed for specific visual elements:
• Retro lens flares
• Film grain texture
• A liminal, dreamlike quality
• No recognizable people or objects—just atmosphere and feeling
The Tools
To bring this vision to life, we opted for a fully digital workflow, eliminating the need for a traditional camera and location scouting. Our tools of choice:
• AI Image Generator: OpenArt (DPM++ 2M SDE Karras sampler)
• Image Editing Software: Adobe Photoshop
The Prompt
We wanted to keep the process as automated as possible, avoiding excessive manual intervention. Using OpenArt, we selected a public model—‘90s Flash Photo—which had previously helped us achieve the nostalgic look we envisioned. Our prompt? As simple as possible: “skate park.”
The Process
Generating the Images
We used OpenArt’s AI image generator to produce 33 images based on our prompt.
Upscaling
Immediately after generation, we upscaled the images 4x. Originally created at 768×680 pixels, they were enlarged to 3072×2720 pixels. While these dimensions may seem unconventional, we embraced the uniqueness.
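OpenArt’s upscaler almost certainly uses an AI super-resolution model rather than plain resampling, but the dimension math above is easy to verify. A minimal sketch using Pillow’s Lanczos filter (an assumption—not OpenArt’s actual method) shows how 768×680 becomes 3072×2720:

```python
from PIL import Image

def upscale_4x(img: Image.Image) -> Image.Image:
    """Quadruple each side of an image using Lanczos resampling.

    This is ordinary interpolation, not AI super-resolution; it only
    reproduces the 768x680 -> 3072x2720 size change described above.
    """
    w, h = img.size
    return img.resize((w * 4, h * 4), Image.LANCZOS)
```

For example, `upscale_4x(Image.open("skatepark.png")).save("skatepark_4x.png")` would write the enlarged copy to disk.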
Organization & Selection
We downloaded and organized all 33 images, then curated a selection of 12 to focus on for further refinement.
AI-Assisted Editing
To keep human intervention minimal, we leveraged Photoshop’s built-in AI generative tools to enhance and refine the images.
The Results
Example 1: The Basics
In the first example, we examined areas that were clearly incorrect and used Photoshop’s generative editing to fix them. This included poles, fences, light posts, and straightening lines—something AI struggles with. We debated tweaking the lens flare effects but opted to leave them as-is, prioritizing efficiency.
The result? Not bad. While it wouldn’t replace a real photographer on location, for quick social posts, a video frame, or animation background, this could work. It looks fine at a glance—just don’t study it too closely.
Example 2: AI Hallucinations
AI hallucinations can get weird. In this second example, a four-legged skateboarder had to go, along with some nonsensical graffiti. The fix? Photoshop’s AI-assisted tools.
The trees generated where the skater had been turned out surprisingly well. However, we ran into a major AI flaw: an inability to produce straight lines. The fences came out abstract and distorted, and refused to align properly despite multiple attempts. After 30 minutes of editing, we moved on.
Example 3: The Best Outcome
By the time we reached the tenth image, we noticed a trend—AI-generated images require extensive post-processing, and Photoshop’s generative tools work best in small, targeted areas.
This example needed heavy edits to make sense. The AI struggled to remove certain components without disrupting the lighting effects, so we made manual adjustments for consistency. The final result? Decent, but still requiring intervention.
So, Do We Fire Bob?
Well, he should update his portfolio.
Looking at the bigger picture, we completed this entire test in about three hours, including planning, image generation, editing, and final production. If this were a real-world project, we would have needed location scouting, weather considerations, camera equipment, potential licensing, staff, and additional production costs.
So, what’s the trade-off? AI tools got us about 70-75% of the way to our desired look and feel.
With extensive editing, we might have reached 90-95% accuracy. But would that be enough?
As AI tools continue evolving, the photography industry—especially commercial photography—may be in for a major shakeup.
What we think
AI photography is evolving rapidly, but it’s not quite ready to replace traditional photographers—at least not yet. Our experiment shows that while AI-generated images can get us close to the desired look and feel, they still require significant editing and refinement. Issues like distorted lines, hallucinated objects, and unpredictable artifacts make AI tools unreliable for high-end creative and commercial work.
That said, the speed and accessibility of AI-generated photography present a compelling case for certain applications, such as social media content, quick design mockups, or supplementary visuals. As AI tools improve in precision and consistency, the industry will have to adapt. Photographers, editors, and creatives may need to shift their roles from pure image creation to curation, refinement, and art direction.
So, should Bob be worried? Not today. But he should definitely start thinking about how to work alongside AI rather than compete with it.