Design x Ammon Haggerty

Co-Founder and VP of Design at Formation

Illustration by Bonnie Kate Wolf


Interview conducted by Sebastián Ortega on September 10, 2020.

Tell us about yourself and how you fell into the San Francisco rave scene.

I’m a fifth-generation Bay Area native who grew up in a family of musicians. My father, mom, stepdad, siblings, cousins, aunts, and uncles were and are all professional musicians. Although I chose a different path, music is something that’s been a core part of my life.

In 1990, I was living in Oregon after spending a couple of years chasing the snow with a band of semi-pro snowboarders. When I moved back to the Bay Area, I took an overnight bus that arrived in San Francisco at around 6 a.m. I remembered that my close friend Buck had been DJing at a party called “ToonTown” the previous evening, but I knew it would go late and thought I might catch him leaving. I arrived at Club 181 Townsend and found the place packed with people dancing and smiling freely—the energy was ecstatic. This was my introduction to the San Francisco rave scene, and I knew at that moment it would be the next chapter in my life.

My introduction to graphic design was creating a club flyer for Buck’s weekly party at Club DV8 called “Church.” I used the Mac & Cheese box graphics and changed it to say “Macaroni & Church.” To my knowledge, this was before the trend of appropriating pop-culture elements in rave flyers. Most flyers in those days were hand-drawn and Xerox-copied.

Graphic by Ammon Haggerty


What else did you make back then?

I did a bunch of flyers, album covers, and t-shirts, but I have to say I’m not proud of my designs from then. Looking back, I feel the interesting nature of my journey as a designer was stumbling into things and figuring them out. I was very naïve, but I loved the challenge and loved being creative. There were no boundaries; I was always promoting and sharing my designs. I like that aspect of my journey as a designer—not comparing myself to others and not really caring what people think about my work.

I helped design t-shirts and marketing materials for a company called Anarchic Adjustment, one of the premier rave fashion companies. We created clothes for a lot of the big acts, including The Shamen and Towa Tei of Deee-Lite. Anarchic was one of the go-to brands of that time for rave culture. It was very popular in Europe and Japan.

In 1995, some of my design work was included in a German book called Localizer. It was the first major book I was aware of to showcase the graphic design of the global rave scene. There is even a picture of me in there as a young raver, which is funny. It was an honor to share the pages with one of my design heroes of that era, The Designers Republic. This was a turning point for me as a designer—seeing acknowledgement of my creative output. 

How did your interest in the underground rave scene connect with the way you articulate your design decisions?

For the first half of my career, music was deeply intertwined with my work and my design philosophy. It was hard for me to separate it. I was also DJing three to four nights a week for many years. Music—not just the rave scene—was the core inspiration of all my design work, whether it was inspired from album art or the feelings and moods of the music I was listening to. I drew inspiration from the culture and visuals I was seeing at parties, and I would use them as an anchor to explore new ideas. In the past decade, as my focus has shifted towards people management and business strategy, there has been less direct influence from music, but it’s still a very important part of my life.

“I drew inspiration from the culture and visuals I was seeing at parties, and I would use them as an anchor to explore new ideas.”

What pulled you into artificial intelligence?

I started thinking about the relationship of AI and interaction design when I joined a company called Obscura Digital in 2008. When I joined, the team was a ragtag group of performers and technologists creating large-scale spectacles for corporate events—think projection mapping and music inside a 100-foot geodesic dome. They brought me in to lead their interaction and experience design team. I started thinking about how to bring a humanized and personalized experience to the environments the team was already creating. 

My first project at Obscura was to design and build a custom 18-foot-long, interactive “Memorabilia Wall” for the Hard Rock Cafe restaurants. The challenge was to create something that was both a large-scale spectacle and a highly personalized individual experience. Intelligence in the user experience, or at least the perception of it, was a part of the solution. A lot of the work at Obscura used some level of AI and advanced machine learning in the computer vision components, so my awareness grew from there.

Another exciting project at Obscura was for a company called Haworth, a global leader in office systems. They wanted us to explore and prototype “the office of the future.” We experimented with ideas around physical augmented reality and how these tools could enhance your abilities to think, create, and communicate. We began thinking about this notion of “bionics,” augmenting humans with superpowers. We ended up creating a tool for remote collaboration called Bluescape, which is commercially available, although it no longer reflects the original prototype we created.

Around this time, a friend introduced me to an amazing individual at Microsoft named Blaise Agüera y Arcas, a brilliant creative visionary and computer scientist. After talking with him about some of my projects and interests, he asked me to join his team. I accepted immediately. Blaise’s team had some of the most brilliant technologists and creative thinkers I have ever had the pleasure to work with, and there were several engineers and designers experimenting with advanced machine learning and AI. That’s when it really became a part of my understanding, interest, and vocabulary. For a while, I worked on Cortana, a conversational agent like Siri. I worked on HoloLens in the early prototyping days. We worked across many experimental projects to explore how intelligence and contextual understanding could transform interaction design.

Do you think machine learning can give humans the “bionics” to improve government or American society?

I can see a potential for positive outcomes, but I also see the world moving in the wrong direction at the moment, and I don’t have a lot of faith that moral decisions will ultimately prevail. The primary focus of the people driving machine intelligence outcomes is short-term gain, which does not prioritize the long-term health of society. I am hopeful this could change, as AI may “educate” us about where our priorities should be.

What ideas do you not have faith in?

At Formation, a company I co-founded, I think about customer experience as a byproduct of machine intelligence, and we see the direct results of short- versus long-term optimization. While it may seem obvious that a long-term happy customer would be the most valuable outcome, most large corporations are geared towards quarterly, weekly, or daily revenue goals, and there are expectations within organizations for continuous and rapid revenue growth.

That’s been the discouraging side of AI. In the corporate context, AI is largely trained to extract as much wealth as it can, with little regard for the consumer. The systems are not being trained to find customer and business harmony, even if it means the value will be short-lived. AI and machine learning, in this case, become a reflection of the values and objectives of the business, and in the US the general goal is domination rather than sustainability or societal benefit.

Do you think big business and machine learning will be perceived as synonymous, if major corporations continue profiting this way?

I definitely see that now, especially in the financial markets. Much of the core decision-making is being driven by machine intelligence. We’re seeing an interesting example with the pandemic, where the Federal Reserve can manipulate and stabilize the markets by gaming the indicators that drive volatility. It’s probably an oversimplification of what’s going on, but it’s a good example of where politics, corporate interests, and machine intelligence are highly intertwined.

Companies like Amazon and Facebook leverage advanced machine intelligence to steer consumer behavior and optimize every corner of their business. As their dominance grows, they’re able to optimize their profitability across a very large domain, which makes competition from smaller players nearly impossible.

That’s the part that gives me the most concern: this singular focus on extracting value. But I don’t want to focus entirely on the negatives. There are also many positives, and I think many people are obsessing over the negatives these days.

I often think about how the experiences of engaging in life and our surroundings could become enhanced by augmented intelligence that knows you and your interests—like a guardian angel. You can think of it on a personal level, like browsing the web, or on a macro level, like representation in government. I think about my kids, who are eight and 11 years old—they are entering an increasingly complex, interconnected world. It may become a necessity to have some level of personal AI augmentation just to deal with that complexity—particularly when dealing with technology.

What would a guardian angel for marginalized communities or non-corporations look like?

It’s challenging to introduce a digital proxy without re-introducing—and possibly amplifying—all the problems we have today.

“It’s challenging to introduce a digital proxy without re-introducing—and possibly amplifying—all the problems we have today.”

At Microsoft, we explored and developed projects along these lines, and one concern is the impact of an AI arms race, where ever more sophisticated AI is developed to outsmart opposing AI, which could exacerbate inequality. For this reason, we explored the importance of a trusted, non-commercial, non-competitive foundation for intelligent agents. If a personal AI agent is engaging with a corporate or governmental agent, it’s important that it’s a level playing field, grounded in trust, transparency, and mutual value. The UX side of this problem space is fascinating.

This idea of the AI guardian angel sounds great to implement. Imagine if two different people’s guardian angels met and learned from each other to improve for future situations.

Yes! Learning and adapting is an interesting aspect of an AI. 

The biggest challenge is trust. That was one of the key insights that came out of my research at Microsoft. Whether these experiences succeed depends on whether we can build them inside a trusted framework. We are already seeing a decay of trust in large companies—whether it’s Google, Amazon, or Facebook. They have highly intelligent and personalized layers of information that provide value to users while extracting monetary value from them. But they don’t have a framework that’s harmonious with the people who are using these applications.

These large data-centric corporations have enough information to predict your future more accurately than you can. They can anticipate what you’re going to do weeks, possibly years, in the future. Google is sitting on an incredibly rich set of data representing a large percentage of the world’s population. They can only take action on a fraction of their customer insights, because doing so would be an enormous breach of trust. If they had built their customer data relationship with empowerment and trust at its core, they could bring ideas like the guardian angel concept to life today, and other radical ideas that augment people with “bionic” superpowers. At this point, most of these corporations don’t have the trust to leverage the data goldmine they’re sitting on. That’s a huge missed opportunity.

Microsoft is one of the few companies, along with maybe Apple, in a good position regarding trust. When I was at Microsoft, we were exploring an early version of the guardian angel concept. The system would aggregate all of your personal data, then you would train an AI agent on your preferences for privacy, trust, value exchange, expectations, etc. In this concept, AI agents would live in service endpoints or augment service workers.

An example of this in use might be someone visiting Nike.com. Before the site loads, a negotiation between your agent and Nike’s agent would determine what data was available and the value of the exchange. When the site loads, you’d see a highly personalized experience that may have special accommodations for things like health and wellness considerations. There would also be a discount for the data you provided. This same scenario could also show how an agent could protect your privacy by preventing the Nike agent from getting any data at all, including anonymous data like IP, location, browser, etc. The agent can serve as both a wall and a data conduit.
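The negotiation described above could be sketched in code. This is purely an illustrative toy, not anything built at Microsoft: the class names, fields, and discount mechanics are all invented for the example. The idea is that a personal agent holds user-trained preferences (what may be shared, and the minimum value expected in return) and either grants a subset of the requested data or acts as a wall and shares nothing.

```python
from dataclasses import dataclass

@dataclass
class DataOffer:
    """What a site's agent requests, and the value it offers in exchange."""
    fields: set          # data fields the site would like to receive
    discount_pct: float  # discount offered in return for the data

@dataclass
class PersonalAgent:
    """A user-trained proxy holding privacy and value-exchange preferences."""
    shareable: set           # categories the user is willing to share
    min_discount_pct: float  # minimum value expected for any exchange

    def negotiate(self, offer: DataOffer):
        # Grant only the intersection of what's requested and what's shareable.
        granted = offer.fields & self.shareable
        # If the site undervalues the data (or requests nothing shareable),
        # the agent acts as a wall: no data, no deal.
        if offer.discount_pct < self.min_discount_pct or not granted:
            return set(), 0.0
        return granted, offer.discount_pct

# A hypothetical shoe-store scenario: location is requested but never shared.
agent = PersonalAgent(shareable={"shoe_size", "activity_level"},
                      min_discount_pct=5.0)
offer = DataOffer(fields={"shoe_size", "location", "activity_level"},
                  discount_pct=10.0)
granted, discount = agent.negotiate(offer)
# granted contains shoe_size and activity_level; location is withheld
```

In a real system the negotiation would be a multi-round protocol between two services, grounded in the kind of trusted, transparent foundation described above, rather than a single in-process function call.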

I love this concept and want it to be real—there’s so much potential! There’s also an enormous amount of important design work that needs to be considered when building these types of ideas.

What tips would you give people starting out in design or machine learning?

Much of my education and development has come from creating projects I’m excited about, and using these passion projects to dive into new tools and domains. I’m not a conceptual learner—I learn from creating, failing, and persevering.

“I’m not a conceptual learner—I learn from creating, failing, and persevering.”

One tip I share quite a lot, especially with designers, is to focus on the work they’re most passionate about and do whatever it takes to pursue that work. Interning is an amazing way to accelerate your skills. You can get two years’ worth of school compressed into a couple of months of interning with a talented team.

And curiosity—be ruthlessly curious.


Connect with Ammon.


ABOUT THE INTERVIEWER

Sebastián Ortega

Brand Team at Design x Us
