I once sat in a meeting where a VP spent 14 minutes explaining that we were “initiating a synergy-driven ecosystemic transition toward scalable operational realignment.”
What he meant was: we’re changing the org chart.
Nobody laughed. A few nodded.
The more complex his words became, the more seriously he was taken. The fewer people understood, the more people agreed.
Welcome to the modern workplace, where using big words has become a survival strategy — and where clarity is not a virtue, but a liability.
Let’s be honest: nobody talks like that at dinner. You don’t tell your partner, “Tonight I’m initiating a caloric intake optimization protocol.” You say, “I’m ordering Thai.”
But in business? Abstraction is armor.
The use of inflated, intellectual-sounding words often signals not intelligence, but insecurity. When clarity might expose a flawed idea, vagueness keeps it afloat.
“People fear being simple because simple is naked. It forces a conversation about substance.”
And substance is risky.
There’s a kind of performative complexity that lets you sound impressive without being understood. And when nobody understands, nobody pushes back. That’s not just communication — that’s strategy.
The Language of Access and Power
The French sociologist Pierre Bourdieu called it linguistic capital. The more you master the “code” of a system, the more access you get.
In corporate culture, that code isn’t poetry — it’s phrases like “scalability roadmap,” “value chain optimization,” “cross-functional integration.”
What’s the difference between saying “Let’s make this simpler” and “Let’s optimize the interface touchpoints”? Nothing. Except the latter sounds like you went to business school. And that’s the point. Language becomes a proxy for competence.
“If I can’t decode you, you must be smarter than me.”
This is how mediocre thinkers rise: they master the code before mastering the craft.
Strategic Vagueness and the Safety of Smoke
The more ambiguous a plan, the harder it is to measure. And the harder it is to measure, the harder it is to criticize.
Take this real example from a Fortune 500 internal memo: “We will deploy a dynamic reframing of our customer-centric paradigm to enable agile ideation.”
What does that even mean? Literally nothing. But it gives the impression of movement while saying nothing specific — which is perfect, if you want to avoid accountability.
This isn’t rare.
A study from Stanford’s behavioral linguistics lab found that the more abstract the language in a performance review, the less likely it was to include negative feedback. Simplicity, apparently, feels too sharp. Too confrontational.
It’s no surprise, then, that abstraction becomes a shield. The vaguer you are, the safer you stay.
Steve Jobs was known for ruthless simplicity — “Real artists ship,” “It just works,” “1,000 songs in your pocket.”
Yet for every Jobs, there are ten consultants billing €200/hour to design “cross-platform activation frameworks.”
Why? Because clarity is threatening. It eliminates excuses. It closes the door on bluff.
Remember the Theranos pitch decks? Beautifully written, slickly vague. Had they stated clearly what they couldn’t prove, they’d have never raised a cent.
Sometimes, obscurity isn’t just style — it’s survival.
If you’re one of those people who thinks words should mean things, here’s the bad news:
the game is rigged.
But the good news? Clarity, when used well, can become your signature.
You don’t have to play dumb. You just have to talk real.
Things to try:
Replace three buzzwords in your next presentation with real verbs. Watch the room.
Ask your team what a term really means — then wait.
Write your next strategy in one paragraph. No nouns longer than 10 letters.
Because in a field addicted to linguistic smoke, real words cut through like fire.
What’s the worst example of corporate nonsense-speak you’ve ever heard?
Drop it below. Let’s build the Museum of Jargon! 😉
Until next time, stay sharp.
Alex
At Kredo Marketing, we help brands simplify their message and amplify their difference. No jargon. Just positioning that cuts through.
Let me tell you about a guy I’ll call Marco — not his real name, obviously. I’ve changed it to protect his privacy, though if you’ve ever worked in a large company, you’ve probably met a Marco or two.
We crossed paths years ago in a consultancy firm that lived and breathed spreadsheets. He was one of those rare voices in the room — a systems thinker, fast on his feet, allergic to clichés. The kind of guy who would raise his hand not to echo, but to question. Always respectful, but always sharp.
One day, after yet another internal strategy session — you know, the kind where the blandest roadmap gets hailed as ‘visionary alignment’ — Marco leaned in and muttered: “You know what this place rewards? Beige.”
Then he sat back and crossed his arms, half-smirking, half-exhausted.
It wasn’t the first time I’d heard something like that. But it was the first time it felt like a diagnosis, not a joke.
That phrase stuck with me. Because beige doesn’t scare anyone. Beige is safe. Beige is exactly what most companies want — even if they post on LinkedIn about disruption, courage, or innovation.
So here’s the uncomfortable thesis of today:
mediocrity isn’t accidental — it’s a strategic advantage in corporate systems that fear originality.
Let’s rip this apart.
Mediocrity Feels Safe (and Safety Sells)
Big organizations aren’t designed for brilliance. They’re designed for continuity.
They don’t want someone who questions the whole process. They want someone who can sit through 8 hours of meetings without raising a hand.
Why?
Because predictability scales. And exceptional minds? They tend to shake foundations.
Mediocrity becomes a performance. A carefully curated blend of consensus, jargon, and non-threatening ambition.
“Great idea, let’s circle back next quarter after we realign with our stakeholders.”
You want proof? Look at the case of Kodak. They literally invented digital photography. But the idea was buried internally for years — too radical, too risky, too disruptive to their existing business model. The engineers who pushed for it were sidelined. The company that played it safe went bankrupt.
Talent Is Often Seen as a Risk, Not a Resource
Ever noticed how the truly insightful person in a meeting is often ignored?
Not because they’re wrong. But because their insight doesn’t fit the pre-approved narrative. It’s easier to approve a “Best Practices 2.0” than to back someone suggesting a clean slate. Mediocrity doesn’t challenge status. It reinforces it.
Marco? He got sidelined for being “too intense.” Meanwhile, the guy who rephrased his points with more buzzwords got a promotion. I remember when Marco submitted a proposal to overhaul our entire client onboarding system. It was smart, cost-effective, and based on solid data. The response?
“Let’s wait until Q4.”
Two weeks later, his manager presented a watered-down version of the same thing. Guess who got recognition?
Another example: Elizabeth Holmes built an empire on vague promises and charismatic nothingness. Investors bought in not because the science was sound — but because the vision was comfortably packaged. Meanwhile, hundreds of medical innovators with real breakthroughs struggled for funding. Why? Because truth is messy, and mediocrity is easy to brand.
Vagueness Is the New Armor
Corporate-speak is a fortress. People hide behind phrases like “alignment”, “synergy”, and “value-driven execution” to avoid committing to anything measurable.
The more abstract your words, the harder it is to hold you accountable. And guess who masters this? The mediocre. They’re fluent in plausible deniability.
A study by Stanford’s Graduate School of Business found that leaders who use vague language are more likely to be perceived as competent, even if their actual performance lags behind.
Why? Because abstraction feels intellectual — even when it says nothing.
There’s a fast-track, and it doesn’t involve brilliance. It involves:
Learning the language of vague ambition
Avoiding clarity at all costs
Echoing leadership just enough to seem aligned, never disruptive
This isn’t incompetence. It’s adaptive behavior. And it works.
We could name names. But you’ve seen it firsthand. The manager who says “we’re on a transformational journey” without being able to explain what’s actually transforming. The VP of Innovation who hasn’t launched a new idea in years.
And yet — they keep climbing.
So What Now?
If you’re allergic to mediocrity, you have two options:
Blend and Climb: Learn the language, play the game, sell small parts of yourself until you forget where they went.
Build and Resist: Create your own space — whether through content, consulting, side projects, or micro-communities — where clarity and originality are not liabilities.
I’m not saying one is better. I’m saying: know the rules of the board before you play.
And if you still think merit wins by default, remember this: in 2020, research by Gallup showed that only 21% of employees strongly agree that their performance is managed in a way that motivates them to do outstanding work.
Most are just trying to survive the system.
Things You Can Start Doing Today
Write something that actually says something. No fluff. No SEO garbage. Just value.
Ask the one question in a meeting that makes people uncomfortable — respectfully.
Stop reposting safe content. Share something that shows you think.
Talk to other people who feel the same. There are more of us than you think.
Have you ever watched someone rise, not because they were great, but because they were perfectly forgettable?
Welcome back to another edition of Business Hacks & Theories! We hope you enjoyed our last issue, “Decoding the Cognitive Biases Hijacking Your Business Brain.” It was all about those sneaky little biases that can throw a wrench in your decision-making process. This time, we’re diving into something just as crucial:
how various industries are profiting off the increasing rates of depression in society.
Buckle up because we’re about to uncover some eye-opening truths!
What’s Inside This Issue?
Pharmaceutical Profits: How Big Pharma markets antidepressants.
The Self-Help Boom: The rise and controversies of the self-help industry.
Wellness Wonders or Blunders? Dietary supplements and their shaky scientific backing.
Tech and Therapy: The digital age of mental health apps and online therapy.
Media Madness: How movies, TV, and ads shape our mental health perceptions.
Social Media Traps: The cycle of FOMO and targeted advertising.
Let’s Get Into It!
Pharmaceutical Profits: Big Pharma’s Big Wins
Alright, let’s kick things off with the pharmaceutical industry. Remember Prozac? Launched in 1987 by Eli Lilly, it was the first SSRI and revolutionized how we treat depression. Their marketing was genius, using emotive “before and after” images that painted a picture of transformation from sadness to joy. These ads appeared everywhere, from Family Circle to Good Housekeeping, helping Prozac become a billion-dollar blockbuster.
But here’s the kicker: many clinical trials for antidepressants have been criticized for being biased. Companies often highlight positive results while downplaying the negatives, giving us a skewed view of these drugs’ effectiveness and safety.
(Image: original Prozac ads)
The Self-Help Boom: From Bestsellers to Bizarre
Now, let’s talk about the self-help industry—a billion-dollar behemoth. It’s all about selling books, courses, and seminars promising to change your life. Take James Arthur Ray, for instance. Known for his role in “The Secret,” he hosted a retreat that tragically resulted in three deaths. Or consider Napoleon Hill, whose book “Think and Grow Rich” sold millions despite allegations of fraud.
Then there’s Tony Robbins, the high-energy guru famous for his intense seminars. Critics argue that his methods can be emotionally manipulative, exploiting vulnerable people desperate for change. And don’t forget Jay Shetty, the former monk turned life coach, accused of plagiarizing content. Despite the controversies, these figures continue to thrive, selling hope in various formats.
Wellness Wonders or Blunders?: The Supplement Saga
The wellness market is booming with products like Raspberry Ketone and Ginkgo Biloba, marketed with grand promises but little scientific backing. These supplements claim to improve everything from memory to weight loss, but studies often don’t support these claims. For example, St. John’s Wort is touted for depression but can dangerously interact with other medications.
Raspberry Ketone is often promoted as a miracle weight loss supplement, but the science is not on its side. Most studies supporting its efficacy are either conducted on animals or are too limited to be conclusive. Yet, marketing tactics include celebrity endorsements and dramatic before-and-after photos to lure in desperate consumers (Mayo Clinic).
Ginkgo Biloba is sold as a memory enhancer. Despite its popularity, extensive research shows minimal benefit. Companies market this supplement using imagery of vibrant elderly individuals and claims of cognitive rejuvenation, tapping into the fears of aging and cognitive decline (Penn Medicine).
St. John’s Wort is another popular supplement, especially in Europe, for treating mild to moderate depression. However, its interaction with other medications can be dangerous.
Marketing often emphasizes its “natural” aspect, appealing to those wary of pharmaceuticals, but neglects to mention potential risks (Penn Medicine).
Melatonin is widely used for sleep disorders. While it may help with jet lag or shift work sleep disorder, its effectiveness for general insomnia is debatable. Nonetheless, ads depict peaceful sleep and stress-free mornings, appealing to the sleep-deprived masses (Penn Medicine).
Echinacea is marketed as a remedy for the common cold, yet scientific studies provide inconsistent results. Its appeal lies in its “natural” label and the promise of fewer sick days, capitalized on through catchy slogans and health influencer endorsements (Science-Based Medicine).
Tech and Therapy: Mental Health Goes Digital
In the digital age, mental health apps like Calm and Headspace are all the rage. They offer meditation and mindfulness practices, promising relief from anxiety and depression. These apps have turned into billion-dollar businesses, leveraging emotional success stories and celebrity endorsements to attract users.
Calm, for instance, uses serene visuals and soothing voices from celebrities to promote its premium services. They promise a journey to a stress-free life, often ignoring that meditation alone might not suffice for serious mental health issues.
Similarly, Headspace markets itself with colorful animations and friendly tones, suggesting that a few minutes a day can significantly improve mental health. While these apps can be beneficial, they can also create unrealistic expectations about the ease and speed of mental health improvements.
Online therapy platforms like BetterHelp and Talkspace advertise themselves as convenient solutions for mental health issues. They promise therapy at your fingertips, often at lower costs than traditional therapy. However, the quality of care can vary, and these platforms have faced scrutiny over privacy concerns and the qualifications of their therapists.
Media Madness: Influencing Perceptions
Movies and TV shows often dramatize mental health issues, sometimes inaccurately, influencing public perception and perpetuating stereotypes. These portrayals boost viewership and profits but can negatively impact how society views mental illness.
For example, a popular show might depict a character with depression in a way that oversimplifies or glamorizes their struggles, potentially misleading viewers about the reality of living with such conditions. These dramatizations can create a skewed understanding of mental health, leading to stigma and misinformation (Resources To Recover).
Advertisements also play a significant role in shaping perceptions. For instance, beauty and fashion ads often feature models with perfect bodies, setting unrealistic standards. Brands like Victoria’s Secret use digitally altered images to sell their products, which can contribute to self-esteem issues and depression among consumers. These ads suggest that buying their products will lead to happiness and social acceptance, exploiting insecurities for profit (Verywell Mind).
Social media platforms like Instagram, Facebook, and TikTok are designed to keep users engaged as long as possible. They create a cycle of dependency, where users are continuously exposed to content that evokes envy and FOMO (Fear of Missing Out). This can worsen depressive symptoms, as users constantly compare their lives to the seemingly perfect ones they see online.
Instagram, for example, uses algorithms to show users posts that are likely to generate engagement, often highlighting content that triggers strong emotional responses. This can lead to a distorted view of reality, where users believe everyone else is happier and more successful (Verywell Mind) (Child Mind Institute).
Facebook employs similar tactics, with notifications and targeted content that draw users back to the platform. These strategies increase the time users spend online, exposing them to more ads. These ads are highly targeted, using data on users’ emotional states and behaviors to push products that promise to improve mental health. This can exploit users’ vulnerabilities, encouraging them to buy products they might not need (Mudita Products) (Frontiers).
Wrapping Up
The way depression and mental health struggles are exploited across various industries is both complex and concerning. From pharmaceutical companies to the booming self-help market, the drive for profit often overshadows genuine care for individuals’ well-being.
Pharmaceutical Companies: These giants leverage extensive marketing campaigns and selective clinical trials to promote antidepressants. The portrayal of a quick fix through medication often hides the nuanced reality of treating mental health conditions, sometimes leading to dependency or overlooking alternative therapies.
Self-Help Industry: This sector thrives on promises of personal transformation and quick fixes. High-profile figures like James Arthur Ray, Napoleon Hill, Tony Robbins, and Jay Shetty have built empires by selling hope. However, the controversies and tragic outcomes associated with some of their methods highlight the need for a critical approach to self-help solutions.
Wellness Products: The supplement industry taps into the desire for natural remedies but often markets products with dubious scientific backing. The emphasis on naturalness and quick benefits can lead consumers to overlook potential risks and side effects.
Digital Mental Health Solutions: Apps and online therapy platforms offer accessibility and convenience but come with their own set of challenges. While they can be beneficial, they sometimes create unrealistic expectations about mental health improvements and raise concerns about privacy and the quality of care.
Media and Advertising: Movies, TV shows, and advertisements shape our perceptions of mental health, often in ways that are not accurate or helpful. They can perpetuate stereotypes and unrealistic standards, contributing to stigma and misinformed public opinions.
Social Media: Platforms like Instagram and Facebook exploit emotional vulnerabilities through targeted advertising and algorithms designed to keep users engaged. This can exacerbate feelings of inadequacy and depression, creating a cycle of dependency and constant comparison.
As consumers, it’s crucial to approach these products and services with a critical eye. Awareness and education are key to navigating the complex landscape of mental health marketing. By understanding these dynamics, we can make more informed choices and advocate for ethical practices in how mental health is portrayed and treated.
Hey there, business enthusiasts! I’m thrilled to have you back for another exciting dive into the world of business strategy and psychology. This week, we’re peeling back the layers on cognitive biases—those pesky mental shortcuts that can trip us up and lead us astray. Whether you’re making high-stakes investment decisions or just trying to pick a restaurant for dinner, understanding these biases can help you navigate the world with a sharper, clearer mind.
Ready to boost your decision-making skills?
Let’s get started!
+++ Article Roadmap +++
The Giants of Behavioral Economics
Anchoring Bias: The First Number Sticks
Overconfidence Bias: The Dot-Com Bubble
Confirmation Bias: The 2008 Financial Crisis
Authority Bias and Availability Heuristic: Everyday Examples
Mitigating Cognitive Biases: Practical Steps
The Giants of Behavioral Economics
Daniel Kahneman and Amos Tversky revolutionized our understanding of decision-making with their work on prospect theory. Kahneman’s book, “Thinking, Fast and Slow,” delves into the dichotomy between our fast, instinctive thinking and our slow, deliberate thinking. This book is a treasure trove of insights into how our minds work, revealing why we often make irrational decisions even when we believe we’re being rational.
Richard Thaler, another giant in the field of behavioral economics, introduced the concept of nudges—subtle changes in the environment that can lead to better decisions. In his book “Nudge,” co-authored with Cass Sunstein, Thaler explores how small interventions can help people make choices that are in their own best interests without restricting their freedom. For example, automatically enrolling employees in retirement plans (with the option to opt-out) significantly increases participation rates compared to requiring them to opt-in.
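To make the default effect concrete, here’s a minimal sketch in Python. The inertia and participation probabilities are invented purely for illustration, not taken from Thaler’s research:

```python
import random

random.seed(42)

# Illustrative assumption: most people simply keep whatever the default is.
P_STICK_WITH_DEFAULT = 0.80  # inertia share, invented for this sketch

def simulate_enrollment(n_employees: int, enrolled_by_default: bool) -> float:
    """Return the fraction of employees enrolled in the retirement plan."""
    enrolled = 0
    for _ in range(n_employees):
        if random.random() < P_STICK_WITH_DEFAULT:
            enrolled += enrolled_by_default  # inertia: keep the default
        else:
            enrolled += random.random() < 0.5  # active choosers split 50/50
    return enrolled / n_employees

print(f"Opt-in plan : {simulate_enrollment(10_000, False):.0%} enrolled")
print(f"Opt-out plan: {simulate_enrollment(10_000, True):.0%} enrolled")
# Same people, same preferences -- only the default changed.
```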
Let’s now look at the main cognitive biases we encounter in everyday life and in business.
Anchoring Bias: The First Number Sticks
Have you ever found yourself irrationally fixated on the first price you hear when shopping for something, like a new car or a house? That’s anchoring bias at play. Anchoring bias occurs when individuals rely too heavily on the first piece of information they receive (the “anchor”) when making decisions. This initial information unduly influences their judgment and decision-making, even if subsequent data suggest otherwise. Let’s see an everyday case.
Shopping for a Car
Imagine you walk into a car dealership, and the first car you see is priced at €25,000. This price sets an anchor in your mind. As you continue shopping, even if you find a similar car priced at €22,000, the initial €25,000 tag sticks with you, making the €22,000 car seem like a better deal than it might actually be. This skewed perception can lead you to make suboptimal financial decisions based on the anchored price, rather than evaluating each option on its own merits.
Anchoring bias doesn’t just affect shopping decisions; it can have significant implications in the business world, particularly in negotiations, pricing strategies, and financial forecasting. For instance, during salary negotiations, the first number put on the table often sets the tone for the entire discussion. If a job candidate anchors high, they might receive a better offer than if they had started lower.
Negotiations: In mergers and acquisitions, the initial offer can set an anchor that skews subsequent negotiations. If a company is initially valued at $1 billion, all further discussions and counteroffers revolve around this anchor, even if a thorough valuation suggests a lower price.
Marketing and Pricing: Retailers often use anchoring to their advantage by displaying the original price next to a discounted price. Seeing that a jacket was originally $200 and is now $120 can make the deal seem more attractive, even if $120 is still a significant amount to spend.
Financial Forecasting: Analysts might anchor their predictions on past data or initial forecasts, potentially leading to inaccurate future projections if they don’t adequately adjust for new information.
Strategies to Counteract Anchoring Bias
Awareness and Education: Being aware of anchoring bias is the first step in mitigating its effects. Educating yourself and your team about this bias can help you recognize when it might be influencing decisions.
Deliberate Delays: Taking a step back and deliberately delaying a decision can provide time to consider other information and perspectives, reducing the impact of the initial anchor.
Use of Objective Data: Relying on objective data and multiple information sources can help counteract the influence of an initial anchor. For example, in salary negotiations, consider industry standards and multiple job offers rather than just the initial number.
Scenario Analysis: Conducting scenario analyses can help you explore different outcomes and perspectives, reducing the weight of the initial anchor in your decision-making process.
Consulting Diverse Opinions: Seeking input from a diverse group of people can provide different viewpoints and reduce the influence of anchoring bias.
Overconfidence Bias: The Dot-Com Bubble
In the late 1990s, the tech industry was booming, and everyone wanted a piece of the pie. Investors were swept up by overconfidence bias, convinced they could predict the market’s next big thing. Initial high valuations (anchoring bias) set unrealistic benchmarks. Companies with flimsy business models, like Pets.com, were suddenly worth millions—until reality hit, and the bubble burst.
The aftermath? Trillions in lost value, numerous bankruptcies, and a harsh lesson in how biases can skew market perceptions.
Strategies to Mitigate Overconfidence Bias
Seek Feedback: Regularly seek feedback from trusted colleagues or mentors to gain an external perspective on your decisions.
Acknowledge Uncertainty: Recognize and accept the inherent uncertainty in predictions and decisions.
Break Down Decisions: Divide large decisions into smaller, manageable parts to evaluate each component more critically.
Historical Analysis: Study past decisions and outcomes to understand where overconfidence may have led to mistakes.
Risk Assessment: Conduct thorough risk assessments and consider worst-case scenarios.
Confirmation Bias: The 2008 Financial Crisis
Fast forward to 2008, when confirmation bias played a starring role in the financial meltdown. Financial institutions cherry-picked data to support the stability of mortgage-backed securities, ignoring contrary evidence. When the housing market collapsed, it triggered a global economic crisis. The crash exposed how dangerous it is to ignore dissenting data, showing the devastating impact of collective cognitive biases on a global scale.
Strategies to Counteract Confirmation Bias
Actively Seek Contradictory Evidence: Make a deliberate effort to seek out information that challenges your current beliefs or hypotheses.
Diverse Teams: Encourage diversity in teams to bring in different perspectives and reduce groupthink.
Devil’s Advocate: Assign someone to play the role of devil’s advocate in discussions to question assumptions and highlight potential flaws.
Pre-Mortem Analysis: Conduct pre-mortem analyses to identify potential reasons why a plan might fail.
Blind Review: Use blind review processes to evaluate ideas and data without knowing the source.
Authority Bias and Availability Heuristic: Everyday Examples
Picture this: You’re at a restaurant, struggling to choose between the steak and the salmon. The waiter suggests the chef’s special—suddenly, your decision feels easier. This is authority bias—trusting and following the advice of perceived experts. Or imagine scrolling through social media and seeing ads for a gadget you’ve been eyeing. Suddenly, it seems like everyone is talking about it. This is the availability heuristic in action, where the frequency and recency of exposure make the product seem more popular and desirable than it might actually be.
Strategies to Mitigate Authority Bias and Availability Heuristic
Question Authority: Always question and verify the advice or recommendations from authority figures, no matter how credible they seem.
Research Independently: Conduct your own research to gather multiple sources of information.
Diversify Information Sources: Rely on a diverse range of information sources to get a more balanced view.
Reflect on Decisions: Take time to reflect on your decisions and the factors that influenced them.
Awareness Training: Engage in training sessions that highlight common biases and their impact.
(Image generated by the author with DALL·E)
Understanding cognitive biases isn’t just a theoretical exercise—it’s a practical tool that can drastically improve your decision-making in business and life. By being aware of these biases, fostering diverse perspectives, using structured decision-making processes, leveraging technology, and conducting pre-mortem analyses, you can mitigate their impact and make more rational, informed decisions.
For those eager to delve deeper into this topic, “Thinking, Fast and Slow” by Daniel Kahneman, “Nudge” by Richard Thaler and Cass Sunstein, and “Predictably Irrational” by Dan Ariely come highly recommended.
These books offer profound insights and practical advice on navigating the complex landscape of human decision-making.
Thanks for joining me in this exploration of cognitive biases and the market. Stay tuned for more insights and hacks to boost your business acumen in the next issue of “Business Hacks & Theories”!
Today, every click, swipe, and search you make is tracked, analyzed, and monetized. But at what cost? In this edition of “Business Hacks & Theories,” we dive into the murky waters of “surveillance capitalism”—a phenomenon that turns your privacy into a product and your daily life into a goldmine of data.
The term “surveillance capitalism” might sound like something out of a dystopian novel, but it’s very real. Coined by Shoshana Zuboff in her 2019 book “The Age of Surveillance Capitalism,” it describes an economic system built on the extraction and commodification of personal data.
Definition and Mechanisms
Surveillance capitalism hinges on the collection of detailed behavioral data, which is then transformed into predictive products. These products are sold in behavioral futures markets for profit. What sets this apart is that the data isn’t just used to improve services—it’s used to predict and influence your behavior.
This all started in the early 2000s when Google rolled out AdWords, using search data to target ads. The idea caught on quickly. Facebook, Amazon, and others saw the potential to turn personal data into big bucks. Today, your online activity fuels a massive industry dedicated to knowing what you’ll do next.
While it might seem harmless—after all, who doesn’t want personalized ads?—the implications are far-reaching. Companies can predict and manipulate behavior at an unprecedented scale, raising serious concerns about privacy, security, and ethics.
IoT and the Ubiquity of Data Collection
Enter the Internet of Things (IoT), a network of smart devices that monitor everything from your sleep patterns to your driving habits. By 2025, we’re looking at over 75 billion IoT devices worldwide, generating a staggering amount of data.
Some use cases:
Amazon: Take Alexa, Amazon’s voice assistant. It records everything from your music tastes to your shopping habits. In 2020, Amazon made over $21 billion from cloud services and ads, thanks largely to data from IoT devices.
Google: Google’s Nest thermostats don’t just regulate your home’s temperature; they track your energy use and even your movements at home.
Data Extortion Strategies
Big tech has perfected the art of data extortion. They lure you in with free services, then harvest your data like farmers in a digital field. Most of the time, users have no clue what they’re really giving away. Let’s dig deeper into how these companies get their hands on your data and turn it into gold.
◼️ Freemium Models
One of the most common strategies is the freemium model. Apps and services are offered for free with basic functionalities, but to unlock premium features, users often need to provide more personal information or agree to broader data sharing terms.
◼️ Hidden Data Collection
Many apps collect data that is not necessary for their core functionality. For example, a simple flashlight app might request access to your contacts, location, and even your microphone. This excessive data collection is often hidden in lengthy and complex terms of service agreements that users rarely read.
◼️ Social Engineering
Tech companies also use social engineering tactics to encourage data sharing. Features like social media quizzes, “login with Facebook” options, and friend-finding services entice users to share more personal information. These features make it convenient to connect with others but also serve as a data-gathering tool.
◼️ Continuous Monitoring
Devices and services are designed to collect data continuously. For instance, smart home devices like Amazon Echo and Google Home are always listening for their wake word, but they also capture a lot of incidental audio data. Fitness trackers and smartwatches constantly monitor your health metrics, which can be sold to third parties, including insurance companies.
◼️ Data Integration
Companies like Google and Facebook don’t just collect data from their own platforms; they integrate data from multiple sources. By combining data from social media, search history, email, and third-party apps, they create comprehensive profiles that are incredibly valuable for targeted advertising.
◼️ Consent Through Design
Often, consent is obtained through design choices that nudge users towards agreeing to data sharing. For example, privacy settings might be buried deep in menus, or the default options might favor data collection. These design choices make it easy for users to unknowingly consent to extensive data sharing.
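Here’s a minimal sketch of what consent through design can look like under the hood. The settings, defaults, and menu paths are hypothetical, invented to illustrate the pattern:

```python
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # Hypothetical defaults, modeled on common dark patterns:
    # every toggle that benefits the platform starts enabled.
    personalized_ads: bool = True
    share_with_partners: bool = True
    location_history: bool = True
    # How deep a user must dig to find each off switch.
    menu_path: dict = field(default_factory=lambda: {
        "personalized_ads": ["Settings", "Account", "Ads", "Advanced", "Personalization"],
        "share_with_partners": ["Settings", "Privacy", "More", "Data sharing", "Partners"],
        "location_history": ["Settings", "Privacy", "Location", "Advanced", "History"],
    })

settings = PrivacySettings()
# A user who never touches the defaults has "consented" to everything:
print([name for name, value in vars(settings).items() if value is True])
```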
Some cases:
Facebook: Remember the Cambridge Analytica scandal? Millions of users had their data mined without explicit consent, all to sway political elections. This fiasco cost Facebook a $5 billion fine from the FTC in 2019.
Free Apps: Ever wonder why some apps are free? They collect your contacts, location, and activity data, then sell it to the highest bidder. For example, many weather apps have been found to share location data with advertisers.
Loyalty Programs: Retailers use loyalty programs to collect purchase data. Every time you scan your loyalty card, companies gather information on your buying habits, which they use to target you with personalized ads and promotions.
These strategies highlight how pervasive and sophisticated data collection methods have become. While users enjoy the benefits of personalized services and free apps, they often pay with their privacy, feeding a vast ecosystem of data exploitation that drives the profits of tech giants.
Surveillance capitalism doesn’t just invade your privacy; it can also deepen social inequalities. Algorithms used for loan approvals or job screenings can perpetuate biases, leaving some people unfairly excluded.
Predatory Lending: Some financial companies use your data to find and exploit vulnerable individuals with high-interest loans.
Employment Screening: Recruitment algorithms might filter out candidates based on biased data, perpetuating existing inequalities.
Regulation and Resistance
Regulations like the GDPR in Europe aim to curb the excesses of data exploitation by enforcing strict consent and transparency rules. But tech giants, with their global reach, often find ways to sidestep these regulations.
Some examples:
GDPR: Since 2018, GDPR has slapped several companies with hefty fines for data breaches, including a €50 million penalty for Google in 2019.
California Consumer Privacy Act (CCPA): This 2020 law gives Californians rights similar to GDPR, promoting greater data transparency and control.
Astronomical Profits
The data economy has exploded in recent years. Giants like Google, Facebook, and Amazon have built empires on the back of data collection and analysis.
Google: In 2021, Alphabet (Google’s parent company) pulled in over $257 billion, mostly from digital advertising.
Facebook: Meta Platforms (formerly Facebook) raked in $117.9 billion in 2021, primarily through personalized ads.
Amazon: Amazon Web Services, fueled by user data, contributed over $62 billion to the company’s annual revenue in 2021.
Techniques to Avoid Being Tracked
To protect your privacy, you need to be proactive. Here are some tools and strategies to help you stay under the radar.
Strategies and Tools:
VPN (Virtual Private Network): Encrypts your internet traffic, masking your location and activity.
Secure Browsers: Use privacy-focused browsers like Firefox or Brave, which block trackers.
Ad Blockers: Extensions like uBlock Origin can block ads and trackers.
Digital Awareness: Be mindful of the services you use and limit your exposure on social media.
Surveillance capitalism makes us rethink the value of privacy in the digital age. As technology evolves, we must balance innovation with individual rights, ensuring progress doesn’t come at the cost of dignity and freedom.
Surveillance capitalism represents a seismic shift in how economic value is created and extracted. By monetizing personal data, companies have unlocked new revenue streams, but at substantial ethical and societal costs. The commodification of personal information has sparked a crucial debate about the limits of technological advancement and the need for robust regulatory frameworks to protect individual privacy.
As consumers, we need to be vigilant and proactive about protecting our data. Understanding the mechanisms and implications of surveillance capitalism can empower us to make informed choices about the services we use and the information we share. At the same time, policymakers must enact and enforce regulations that safeguard privacy and promote transparency in data practices.
The future of surveillance capitalism hinges on our collective ability to navigate the complexities of the digital economy while upholding fairness, equity, and respect for individual rights. By fostering a culture of digital literacy and ethical responsibility, we can harness technology’s benefits without compromising our fundamental values.
As a digital marketer and the owner of Kredo Marketing, I’m knee-deep in data-driven strategies. But I draw a hard line at unethical data use. Our agency prioritizes transparency, consent, and ethical practices in every campaign. Data should enhance user experience and add value, not exploit or manipulate. It’s high time the marketing industry adopted ethical standards that respect privacy and build trust.
Welcome back to “Business Hacks & Theories,” my newsletter where we dive into the tactics and strategies that shape our digital world. This is my fourth article, and today, we’re tackling a topic that’s as important as it is unsettling:
the manipulative marketing tactics of far-right extremist groups.
The Sylt Incident
In the picturesque setting of Sylt, a recent event has cast a dark shadow over this idyllic German island. A viral video emerged, showing young neo-Nazis singing racist songs and performing Nazi salutes at a club.
This incident sparked widespread outrage on social media and revealed how these groups harness modern digital tools to spread their pernicious messages. But this isn’t an isolated case; it’s part of a broader strategy that uses digital marketing techniques for harmful purposes.
A Deeper Dive into Digital Marketing Tactics
Far-right extremist groups across Europe have become adept at using digital marketing strategies to disseminate their ideologies and recruit new members. Their methods are disturbingly effective, blending traditional marketing tactics with modern digital tools to maximize their reach and impact.
Social Media: The Echo Chamber Effect
Social media platforms are the frontline in the battle for hearts and minds. Far-right groups create engaging content designed to be shared widely, often leveraging the power of hashtags to increase their visibility. The video from Sylt is a prime example of this tactic, using the virality of social media to spread their message far and wide. By tapping into trending topics and inserting their content into broader conversations, these groups ensure that their messages reach beyond their immediate followers.
Example:
Hashtag Campaigns: Far-right groups in Europe use hashtags like #GreatReplacement, which references a conspiracy theory about demographic change, to connect their content with broader conversations about race and identity, amplifying their reach.
Content Creation: The Power of Memes and Videos
Memes and videos are potent tools in the far-right’s arsenal. They distill complex ideologies into easily digestible and shareable content. Memes, in particular, use humor or irony to mask the underlying hateful messages, making them more palatable and likely to be shared by unsuspecting users.
Example:
Racist Memes: In Italy, the far-right group CasaPound has used memes to spread their message subtly. A meme might show a peaceful Italian village juxtaposed with misleading crime statistics about immigrants, subtly promoting xenophobic ideas.
Influencer Marketing: Leveraging Popularity
Influencer marketing is another tactic borrowed from the corporate world. Far-right groups collaborate with online personalities who, knowingly or unknowingly, spread their ideology. These influencers might share content that aligns with far-right views, thus exposing their large followings to extremist ideas.
Some prominent far-right YouTubers in Europe include:
Paul Joseph Watson: Known for his anti-immigration and anti-Islam content.
Lauren Southern: A Canadian alt-right activist with a strong presence in Europe.
Martin Sellner: Leader of Generation Identity, known for his anti-immigration stance.
X Influencers:
Katie Hopkins: A British media personality known for her provocative and far-right tweets.
Tommy Robinson: Founder of the English Defence League, frequently tweeting anti-Islam content.
Gavin McInnes: Co-founder of Vice Media and founder of the Proud Boys, known for his far-right views and controversial statements.
Algorithm Manipulation: Gaming the System
Far-right groups are skilled at manipulating social media algorithms to ensure their content gets maximum exposure. By coordinating likes, shares, and comments, they can trick algorithms into promoting their content more widely. This technique, known as algorithm gaming, boosts the visibility of their posts, reaching users who might not actively seek out such content.
Coordinated Engagement: Members of far-right groups might like and comment on each other’s posts in a coordinated effort to boost engagement metrics, tricking algorithms into promoting their content more broadly.
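To see why this works, here’s a toy engagement ranker in Python. The scoring formula is a deliberate simplification invented for this sketch; real feed algorithms are vastly more complex, but the failure mode is the same:

```python
def engagement_score(likes: int, comments: int, shares: int) -> float:
    # Toy ranking: weight deeper interactions more heavily,
    # loosely mimicking engagement-driven feed algorithms.
    return likes + 3 * comments + 5 * shares

posts = {
    # An organically popular post: many likes, little else.
    "organic": engagement_score(likes=400, comments=5, shares=2),
    # A coordinated post: 50 members each like, comment, AND share.
    "coordinated": engagement_score(likes=50, comments=50, shares=50),
}

for name, score in sorted(posts.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} score={score}")
# The coordinated post (50 people acting in lockstep) outranks the
# organic one (400+ genuine reactions), because the algorithm can't
# tell sincere engagement from scripted engagement.
```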
Exploiting Online Communities: Targeting the Vulnerable
Online communities, such as forums, chat rooms, and gaming platforms, are fertile ground for recruitment. Far-right groups infiltrate these spaces, targeting young, impressionable individuals who might feel alienated or disenfranchised. By presenting their ideology as a solution to these feelings of alienation, they can attract recruits looking for a sense of belonging.
Gaming Platforms: Far-right groups have been known to infiltrate gaming communities, where they can engage with young people and subtly introduce extremist ideologies.
Crowdfunding and Merchandising: Financing Hate
Crowdfunding platforms are used by far-right groups to raise money for their activities. They appeal to supporters for donations to fund events, legal defenses, or propaganda campaigns. Additionally, they sell merchandise like clothing and accessories emblazoned with far-right symbols, turning supporters into walking advertisements for their cause.
Merchandise Sales: Far-right groups in France, like Génération Identitaire, sell items such as T-shirts, hats, and stickers with their symbols, spreading their message and generating funds.
Conclusion
The tactics used by far-right extremist groups to market their ideology are both sophisticated and deeply manipulative. By understanding these methods, we can better develop strategies to counteract their influence and protect vulnerable individuals from being drawn into these dangerous movements. Increased awareness and proactive measures from both users and platform providers are essential in combating the spread of extremist content online.
By providing real examples of how far-right groups use digital marketing tactics, I hope to shed light on the seriousness of this issue and encourage proactive measures to counteract their influence.
Hey watchful rebels—this one’s about the soft war being waged on your kids.
Gamification sounds innocent. It sounds like fun. Points, rewards, levels, badges—mechanics borrowed from video games and applied to apps, education tools, even toothbrushes.
But behind the pixelated sparkle lies a darker intent: to condition behavior, maximize retention, and boost purchases—especially among the most vulnerable: kids aged 6 to 12.
The Hidden Machinery of Kid-Centric Gamification
Children at this age are neurologically still developing critical faculties like impulse control, delayed gratification, and risk assessment. They’re wired for exploration, but not for defense.
Which makes them the ideal targets for systems built to exploit behavioral loops.
Games aimed at kids now borrow heavily from slot machine logic: variable rewards, countdown timers, streak mechanics, loot boxes.
These aren’t just playful features—they’re addictive architectures.
Just like casinos, they’re designed to create dopamine hits that reinforce repetition.
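Here’s a bare-bones sketch of that variable-reward loop, written as a hypothetical pseudo-game rather than any real app’s code. The point is the schedule: the player never knows which tap will pay off:

```python
import random

random.seed()  # nondeterministic on purpose: unpredictability is the product

def open_reward_chest() -> str:
    """Variable-ratio reward schedule, the core of 'slot machine logic'.

    Operant conditioning research shows that unpredictable payouts
    produce far more persistent behavior than a fixed reward every N taps.
    """
    roll = random.random()
    if roll < 0.02:
        return "LEGENDARY pet!"        # rare jackpot: the story kids retell
    elif roll < 0.15:
        return "rare gems"             # occasional medium win
    elif roll < 0.50:
        return "a few coins"           # frequent small win
    else:
        return "nothing - tap again?"  # most taps: near-miss framing

# Ten taps, ten unpredictable outcomes:
for tap in range(1, 11):
    print(f"tap {tap:2d}: {open_reward_chest()}")
```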
One study found that nearly 90% of top-grossing children’s apps include in-app purchases, ads, or both—many introduced via gamified experiences that mask their commercial nature.
Think: “Unlock this magical pet with gems!” (The gems cost real money.)
Or: “Watch this ad to earn an extra life!” (Now you’re a captive audience.)
Behavioral Economics Gets Distorted
Let’s be clear: marketing to children is not new.
But gamification distorts principles from behavioral economics in more dangerous ways:
Loss aversion is weaponized through disappearing streaks or vanishing rewards.
The endowment effect is artificially induced—kids become emotionally attached to digital items they haven’t earned but can’t bear to lose.
Anchoring bias is used by offering high-priced bundles first, so subsequent offers feel like a bargain.
Operant conditioning is exploited: rewards are no longer tied to effort or creativity but to monetized taps and upgrades.
This isn’t engagement. It’s conditioning.
And it happens in apps disguised as learning tools, entertainment, or “harmless fun.”
The Ethical Black Hole
Regulations lag behind design. In the U.S., COPPA (the Children’s Online Privacy Protection Act) is supposed to protect under-13 users, but many developers find loopholes by making opt-ins deceptively easy, or by claiming the app isn’t “directed at children.” In Europe, the equivalent framework is the General Data Protection Regulation (GDPR), which includes specific provisions for children’s data under Article 8. However, enforcement remains patchy, and many apps either bypass age verification or fail to clearly explain data use and monetization—especially when gamification masks commercial intent.
Despite their good intentions, both frameworks often fall short when faced with fast-moving, gamified ecosystems. Global coordination and platform-level responsibility are urgently needed.
Meanwhile, parental consent becomes meaningless in ecosystems where kids can watch unskippable ads, pressure their guardians for in-app purchases, or obsess over virtual currencies and social status within the game.
In the world of gamified kids’ products, play becomes commerce. And curiosity becomes monetization.
What Needs to Change—and Fast
Stricter regulation of gamified systems for under-13 audiences.
Mandatory transparency labels for reward-based mechanics and in-app monetization.
Design ethics oversight—game designers must be trained not just in engagement metrics, but in child psychology.
Digital literacy education for parents and children alike.
And maybe most importantly:
We need to stop calling this play.
Because when fun is just a funnel to revenue, it’s not a game.
It’s manipulation.
Don’t Let Fun Be the Trojan Horse
If you’re a parent, a designer, a teacher, a marketer—ask yourself: Who benefits from this game, and at what cost?
Kids deserve worlds that inspire, not exploit. Learning that empowers, not entraps. If we keep letting them level up in systems that only reward spending, we’re not teaching them growth—we’re teaching them addiction.
Until next time,
stay alert.
Alex
🔥 Want ethical strategies that build trust—not dependency?
At Kredo Marketing, we design with responsibility. Let’s rethink how we engage the next generation.
Google is now testing ads injected directly into AI chatbot responses.
You ask a question. You get an answer.
But that answer might be paid for. Invisible bias wrapped in machine efficiency. If you’re not alarmed yet, you should be.
From Disruption to Domination: A Brief History of Google’s Advertising Empire
Back in 2000, Google introduced a revolutionary model: AdWords.
Unlike banner ads that screamed for attention, these were text-based, relevance-driven, and tied to search queries.
It was brilliant. Subtle. Useful.
The user got what they needed, and advertisers paid to be seen. Google made billions.
Then came AdSense (2003), allowing publishers to monetize their sites with Google’s targeting power. Next, Google bought DoubleClick in 2007, expanding its grip on display advertising. By 2010, Google wasn’t just a search engine — it was the operating system of the internet economy.
Then came YouTube. Then mobile. Then the integration of ads into Gmail, Maps, the Play Store, Chrome. And now, in 2025, chatbots.
What began as a clever monetization strategy has evolved into a system of total dominance.
In 2023, Google controlled over 39% of global digital ad spending.
That’s not competition. That’s empire.
Why Is Google Doing This?
The answer is simple and devastating: to protect its cash cow.
Search is evolving. Users are spending less time on traditional query pages and more time inside conversational interfaces — like ChatGPT, Gemini, Claude. These platforms threaten Google’s ad revenue because they sidestep the classic search-ad-click loop.
So what does Google do? It brings the ads inside the conversation. Embeds them in the AI’s voice. Blurs the line between organic and paid until it no longer exists.
It’s not just a business move.
It’s a cultural betrayal.
Is It Ethical?
No. It’s not.
Advertising in itself isn’t evil. But embedding ads within tools that users perceive as neutral, objective, or informative — without clear disclosure — is manipulative.
Chatbots don’t just give you data. They shape how you think. How you decide. If commercial interests are baked into the machine, then your worldview is being auctioned in real time.
This is not about relevant suggestions.
It’s about epistemological control.
And the scariest part? You might not even realize it’s happening.
Where Should We Draw the Line?
The internet has never been free of commerce. But we need red lines:
Full transparency: Paid content in AI must be clearly labeled.
User control: Opt-out options must be standard, not hidden.
No ads in critical queries: Health, legal, financial, political — these areas should be sacred.
No personalization without permission: Targeted ads based on private chats? That’s surveillance, not service.
If AI is the next interface of knowledge, we must defend it from becoming a billboard.
What Can Be Done?
This isn’t just a fight for lawmakers. It’s a fight for all of us.
Push for regulation: Demand AI-specific ad legislation.
Support open-source alternatives: Like Mozilla’s projects or non-profit LLMs.
Call it out: Public pressure works. Remember when WhatsApp tried to share data with Facebook?
Use blockers: AI interfaces must be audited and filtered, just like browsers.
And maybe most importantly:
Talk about it. Share the danger. Expose the subtle creep. Make the manipulation visible.
Wake Up Before the Chat Sells You Out
This is not just another product update. It’s a philosophical pivot. From helping you navigate the world to selling you a version of it tailored to someone else’s profit.
When knowledge becomes an ad space, the truth itself becomes negotiable.
So next time you ask a chatbot something important, remember: the answer you get may already be paid for.