I'm about to launch a business that most people will misunderstand.
On paper, Digital Creative Alliances is a WordPress agency. We build websites. We charge money. We aim to be profitable.
But that's not really what I'm doing.
What I'm actually building is a proof of concept: Can you run a successful business in 2025 while actively refusing to participate in the worst parts of how modern tech operates?
This is my attempt to answer that question. And this post is my commitment—written publicly so I can't quietly abandon it when things get hard.
The Problem Nobody Wants to Talk About
We're accelerating toward something nobody fully understands.
AI isn't just "another tool." It's a fundamentally different kind of tool: one that gets exponentially smarter, consumes exponentially more energy, and makes decisions at scales humans can't comprehend.
And the people building it? They tend to fall into one of three camps:
- True believers who think AGI will solve everything
- Profiteers who don't care what happens as long as they get rich first
- Concerned engineers who quit big tech because they see where this is going
I don't know which group is right. But I know this: we're running an experiment on civilization without informed consent.
The Uncomfortable Math
Every ChatGPT query. Every AI-generated image. Every "enhance this code" request.
It all requires:
- Data centers consuming megawatts
- Water for cooling (millions of gallons)
- Rare earth minerals mined in conditions we don't talk about
- Carbon emissions we're not accounting for
And it's growing exponentially. Not linearly. Exponentially.
Training GPT-3: ~1,300 MWh (roughly the annual electricity use of 130 US homes)
Training GPT-4: Estimated 10-25x that
What happens at GPT-7? GPT-10?
We're burning the planet to make autocomplete smarter.
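If you want to sanity-check those figures, the math is simple enough to run yourself. The sketch below assumes an average US household uses roughly 10.5 MWh of electricity per year (my assumption, in line with published US averages) and reuses the 10-25x GPT-4 estimate quoted above; none of these are measured numbers.

```python
# Back-of-envelope check on the training-energy figures above.
# Assumptions (not measured data): an average US household uses
# ~10.5 MWh of electricity per year, and GPT-4 training cost
# 10-25x GPT-3's, as estimated in the post.

GPT3_TRAINING_MWH = 1_300        # widely cited estimate for GPT-3
US_HOME_MWH_PER_YEAR = 10.5      # rough US household average

home_years = GPT3_TRAINING_MWH / US_HOME_MWH_PER_YEAR
print(f"GPT-3 training ~ {home_years:.0f} US home-years of electricity")

for multiplier in (10, 25):
    gpt4_mwh = GPT3_TRAINING_MWH * multiplier
    print(f"GPT-4 at {multiplier}x ~ {gpt4_mwh:,.0f} MWh "
          f"~ {gpt4_mwh / US_HOME_MWH_PER_YEAR:,.0f} home-years")
```

Even at the low end of that estimate, a single training run lands in the thousands of home-years, and that's before counting the per-query costs listed above.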
Why Most "Responsible AI" Talk Is Bullshit
Every tech company now has a "responsible AI" page.
They all say the same things:
- "We prioritize ethics"
- "We're committed to sustainability"
- "Human-centered design"
- "Transparent practices"
It's marketing. Not action.
Because here's the truth: if you're racing to AGI, you're not being responsible. You're in an arms race where slowness = irrelevance.
Microsoft, Google, OpenAI, Anthropic—they're all saying "we'll be responsible" while simultaneously saying "we must move faster than competitors."
You can't do both.
So most companies pick speed and pretend they picked responsibility.
What I'm Actually Trying to Do
I'm not trying to solve AI's existential problems. I'm one person launching an agency in Cambodia. I can't fix this.
But I can refuse to participate in the worst parts.
Here's my actual commitment:
1. Use AI as a productivity tool, not a replacement strategy
What I mean:
- AI helps our team work faster (content drafts, code scaffolding)
- Humans make all final decisions and review all outputs
- We hire people and train them to use AI effectively
- We don't eliminate jobs to increase margins
Why this matters:
Most agencies will use AI to fire people and pocket the difference. I'm using AI to make my team more effective so I can hire MORE people at fair wages.
2. Partner with infrastructure providers who have verifiable sustainability commitments
What I mean:
- OPTe uses AWS/Azure (both have public renewable energy goals)
- Cloudflare (carbon neutral since 2018)
- OPTe's multisite architecture reduces redundant server usage by orders of magnitude
Why this matters:
Every website I build sits on servers somewhere. Those servers consume energy. I can't eliminate that, but I can choose partners who are at least trying to use renewable energy.
3. Be transparent about what I don't know
What I mean:
- I don't know if my approach is "right"
- I don't know if it scales
- I don't know if I'll stay profitable while doing this
- I'll document what works and what doesn't, publicly
Why this matters:
Most people pretend they have all the answers. I don't. This is an experiment. If it fails, at least the data will be useful to whoever tries next.
4. Build for decades, not exits
What I mean:
- I'm not building to sell in 5 years
- I'm not optimizing for maximum extraction
- I'm building a business I can run for 20+ years
- Profit is necessary, but it's not the only goal
Why this matters:
Most startups optimize for investor returns and quick exits. That creates perverse incentives. I'm private, bootstrapped, and optimizing for longevity. Different game.
The Cognitive Dissonance I'm Living With
Here's the uncomfortable part: I benefit from the system I'm criticizing.
I use Claude (Anthropic's AI) to draft content. I use ChatGPT to debug code. I use Midjourney for design inspiration.
I am part of the problem.
So why do this at all?
Because the alternative is worse.
If responsible people refuse to use AI, we don't stop AI. We just ensure that ONLY irresponsible people control it.
So my position is: Use it minimally. Use it deliberately. Use it transparently. Pay for it (don't use free tiers that make you the product). And constantly question whether you actually need it.
Is this hypocritical? Maybe.
Is it better than uncritical acceleration? I think so.
What Success Looks Like (And What Failure Looks Like)
Success Scenario (10 years from now):
- DCA is a profitable, sustainable agency employing 50-100 people across Southeast Asia
- Our clients are businesses that share these values and chose us because of them
- We've published annual transparency reports showing our environmental impact
- Other agencies have copied parts of our model (even if they don't credit us)
- I still believe the work matters
Failure Scenario 1 (Compromise):
- DCA is profitable but I've abandoned the principles to get there
- We're using AI to replace people, not augment them
- I'm saying "responsible AI" in marketing but cutting corners in practice
- I've become what I criticized
Failure Scenario 2 (Irrelevance):
- DCA fails because nobody wants to pay for "responsible" work
- Competitors eat our lunch by being cheaper, faster, and less principled
- I run out of money and have to shut down
- The experiment proves that you can't succeed this way
I don't know which scenario happens. But I'm committing to find out.
Why I'm Writing This Publicly
Most founders keep this stuff private. They have a public persona (confident, certain, successful) and a private reality (uncertain, struggling, improvising).
I'm reversing that.
I'm publishing my uncertainty. My concerns. My cognitive dissonance.
Why?
Because accountability matters.
If I write this publicly, I can't quietly abandon it later. If someone asks me in 2030, "did you do what you said?", this post will be here.
Because I want allies, not just customers.
If you read this and think "this guy is naive/idealistic/foolish," we're not aligned. That's fine.
If you read this and think "fuck yes, someone's actually trying," then maybe we should talk.
Because transparency is the only moat I have.
Big agencies have money, scale, networks. I have honesty. That's my differentiation.
The Invitation (To Anyone Who Actually Read This Far)
This isn't a pitch. I'm not trying to sell you anything.
But if you:
- Care about these issues
- Are building something similar
- Have knowledge I need
- Want to collaborate on solutions
- Or just want to tell me I'm wrong
Reach out: https://digitalcreativealliances.com/creative-web-design-cambodia-contact/
I don't have answers. But I'm asking better questions than most people in this space.
And I'm documenting the journey.
What I'm Committing To (The Measurable Stuff)
By 6 months (May 2025):
- Launch DCA officially with first 5-10 clients
- Publish our AI usage protocols (exactly how we use it, limits we set)
- Document our energy consumption per client site (if OPTe shares data)
- Write honest retrospective on what worked / what didn't
By 12 months (November 2025):
- Employ 5-10 people at fair wages (above the Cambodian market rate)
- Publish full transparency report (revenue, costs, environmental metrics)
- Open-source our agency operations playbook
- Host workshop on sustainable web practices in Phnom Penh
By 5 years (2030):
- Build to 30-50 person team
- Still operating on these principles (or have documented why I changed them)
- Prove you can be profitable AND responsible
- Or admit I was wrong and show the data
The Bigger Picture (Why This Actually Matters)
One agency in Cambodia changing how it uses AI doesn't fix climate change.
But here's what I believe:
Systems change through edge cases.
Patagonia started as one outdoor clothing company refusing to optimize purely for profit. Now "conscious capitalism" is a category.
TOMS started as one shoe company giving away shoes. Now "social entrepreneurship" is taught in business schools.
Tesla started as one car company making electric vehicles aspirational. Now every car company has an EV strategy.
Edge cases become categories. Categories become systems. Systems change culture.
I don't know if DCA becomes an edge case that matters. But I know this:
Someone has to try building differently. Why not me? Why not now?
A Note to My Future Self
If you're reading this in 2030 and you've compromised everything:
You knew better. You wrote this when you still believed it was possible. Don't make excuses.
Either do it, or admit you failed and explain what you learned.
If you're reading this in 2030 and you've succeeded:
Don't get comfortable. The next generation is watching. Keep evolving. Keep questioning. Keep building differently.
If you're reading this in 2030 and the whole thing collapsed:
At least you tried. At least you documented it. At least someone can learn from your failure.
The worst thing you can do is nothing. You chose something.
To Everyone Else
You don't have to agree with me.
You don't have to support DCA.
You don't even have to think this matters.
But if you're building something—anything—ask yourself:
What are you optimizing for? And is it worth it?
Most people never ask that question. They just optimize for whatever the system tells them to optimize for.
Money. Growth. Scale. Exit.
But you can choose different optimization functions.
I'm choosing: Build profitably while minimizing harm and maximizing human dignity.
Maybe I fail. But at least I chose deliberately.
This is version 1.0 of this thinking. I'll update it as I learn more. Follow along at [blog link]
If you want to build something similar, steal these ideas. I don't own them. I'm just trying them first.
If you want to tell me I'm wrong, bring data. I'll change my mind if you're right.
If you want to help, reach out. I don't have all the answers. Nobody does.
Let's figure this out together.
Mosses Chan
Founder, Digital Creative Alliances
Siem Reap, Cambodia
November 2024
P.S. — If you're from a big tech company and you think I'm being unfair, you're probably right about some details. But you're wrong about the trajectory. And deep down, you know it.
P.P.S. — If you're a competitor reading this thinking "this guy is naive, we'll crush him," you might be right. But you'll have to live with optimizing for things that don't matter. I'd rather fail trying something that matters than succeed doing something hollow.
P.P.P.S. — If you're someone who cares about this stuff and feels alone in caring, you're not alone. There are more of us than you think. We're just quieter than the loudest voices in the room.