5 Questions to Ask About Your Organization’s Strategy

If you have a strong understanding of how culture affects your organization’s strategy, you are better positioned to create strategies that are truly transformative for your organization. Having said that, most organizations don’t make time to strategize about their strategy development processes and thus are not fully aware of the intended and unintended effects of their pursuits. The three main reasons for this lack of awareness are:

  1. The fallacy that strategy should always be top-down
  2. The lack of a holistic approach to strategy development and feedback
  3. The half-baked idea that strategy can only be created by a few people

An organization’s overall strategy is a combination of policy and a plan of action intended to improve the making, buying, or selling of goods and/or services for the customer. Thus, it becomes imperative for organizations to keep the customer at the center of what they do and to create customer experiences that make their lives easier.

If you want strategy to be shelf-ware that looks pretty on an executive’s file cabinet and is only cool to talk about, then don’t read ahead. For those of you who want strategy to be more than just an exercise, I invite you to ask the following questions about strategy and strategy development processes within your own organization:

Strategic Perspectives on Strategy:




1. Who is incentivized at the executive level to create strategy? Who should be incentivized at the executive level to create strategy?
2. What governance structures are in place for transforming how strategy is created? What governance structures should be in place for transforming how strategy is created?
3. Where is technology used to support strategy? Where should technology be used to support strategy?
4. When and how often are strategic objectives communicated? When and how often should strategic objectives be communicated?
5. Why are holistic strategy development processes critical to achieving strategic objectives? Why should holistic strategy development processes be critical to achieving strategic objectives?

Tactical Perspectives on Strategy:




1. Who is incentivized at the middle management level to give feedback on strategy? Who should be incentivized at the middle management level to give feedback on strategy?
2. What business units, functional areas, and teams are included in developing strategy? What business units, functional areas, and teams should be included in creating strategy?
3. Where does technology hinder strategy development processes? Where might technology hinder strategy development processes?
4. When is the start and end of meeting strategic objectives communicated? When should the start and end of meeting strategic objectives be communicated?
5. Why are strategy development processes critical to achieving tactical objectives? Why should strategy development processes be critical to achieving tactical objectives?

Operational Perspectives on Strategy:




1. Who sees strategy development processes as an obstacle? Who might see strategy development processes as an obstacle?
2. What business processes provide views on the organization’s actual vs. perceived strategy? What business processes should provide views on the organization’s actual vs. perceived strategy?
3. Where is technology part of how you understand the organization’s strategy? Where should technology be part of understanding the organization’s strategy?
4. When were you informed about the strategic objectives and strategy development processes? When should you have been informed about the strategic objectives and strategy development processes?
5. Why are strategic objectives critical to accomplishing your daily tasks? Why should strategic objectives be critical to accomplishing your daily tasks?

To be clear, strategy and strategy development affect everyone inside and outside your organization, including executives, middle management, front-line employees, and the customers you are trying to acquire. Thus, your organization’s strategy development processes should be robust enough to take long-term, holistic views, yet flexible enough to handle bumps along the way and take advantage of technological advancements.




2 Takeaways from the 2018 Spring Meetings by the International Monetary Fund (IMF) and the World Bank

Every year the IMF and the World Bank hold a conference-style event referred to as the Spring Meetings. These Spring Meetings bring together central bankers, ministers of finance and development, private sector executives, and academics to discuss global issues such as the global economy, international development, and the world’s financial markets.

This year I had the opportunity to attend the 2018 Spring Meetings, where discussions were held about the threats and opportunities of technological change as it affects global economies and policies. Here are 2 takeaways from the 2018 Spring Meetings focused on technology and innovation:

    •  Industrialization Paradigms
      • Typical Industrialization: Agriculture → Manufacturing → Services
      • Current Industrialization: Agriculture → Services
    • Impacts of Technology
      • Technological Changes → Job loss → Re-skill → New Jobs
      • Some jobs will never be recovered
      • Technology and expertise don’t flow easily across countries
      • Even within countries technological impacts are uneven causing inequality
      • A good balance between data privacy and business models is needed, one that benefits society at large
      • Whether innovation happens internally or externally to an organization can impact society at different levels
      • A good balance of foundational and advanced education is needed
      • Overly specialized knowledge can negatively affect society as a whole
    • Artificial Intelligence (AI)
      • Dystopian Views: AI will take over most human activities and would rule over humans
      • Middle Ground Views: AI will augment and enhance human activities but never replace humans
      • Utopian Views: AI will take over most human activities that would free up time for humans to do other things
    • The Brave New World of Data
      • Data quality issues are borderless
      • Standard definitions of economic data have to be agreed upon and used
      • Data is being used to build economic policies
      • Data is being used to create multinational economic blocs
      • Data is being used to gauge the health of the global economy
      • Data Standardization and Harmonization → Data Transparency → Data Accountability
    • For economic prosperity, no organization, country, or region is an island unto itself
    • Bridges need to be built across public, private, academic, non-profit, and shareholder groups
    • Regulations are slow to adapt to technological advancements and can be too heavy-handed or too light-touch if not properly understood by policy makers
    • Grassroots changes are affecting how governments function and adapt
    • Technology and innovation should have executive level consideration across all branches of government and not just a ministry or a few people

Bonus: IMF’s Innovation Lab (iLab)

The IMF has created the iLab, whose goal appears to be examining how technology and innovation affect the global economy and economic policies in various countries.


What Questions Do You Have For Mark Zuckerberg?

Mark Zuckerberg will be testifying at a joint hearing before the Senate Judiciary and Senate Commerce, Science and Transportation Committees on April 10, 2018 and at a House Energy and Commerce Committee hearing on April 11, 2018 regarding Facebook’s use and protection of user data, particularly pertaining to the 2016 U.S. Presidential Elections. Prior to the hearings, Mark Zuckerberg’s prepared statement has been released to the public.

Following is the prepared written statement of Mark Zuckerberg, along with a list of my own questions at the end:


April 11, 2018

Testimony of Mark Zuckerberg Chairman and Chief Executive Officer, Facebook


Chairman Walden, Ranking Member Pallone, and Members of the Committee,

We face a number of important issues around privacy, safety, and democracy, and you will rightfully have some hard questions for me to answer. Before I talk about the steps we’re taking to address them, I want to talk about how we got here.

Facebook is an idealistic and optimistic company. For most of our existence, we focused on all the good that connecting people can bring. As Facebook has grown, people everywhere have gotten a powerful new tool to stay connected to the people they love, make their voices heard, and build communities and businesses. Just recently, we’ve seen the #metoo movement and the March for Our Lives, organized, at least in part, on Facebook. After Hurricane Harvey, people raised more than $20 million for relief. And more than 70 million small businesses now use Facebook to grow and create jobs.

But it’s clear now that we didn’t do enough to prevent these tools from being used for harm as well. That goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy. We didn’t take a broad enough view of our responsibility, and that was a big mistake. It was my mistake, and I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here.

So now we have to go through every part of our relationship with people and make sure we’re taking a broad enough view of our responsibility.

It’s not enough to just connect people, we have to make sure those connections are positive. It’s not enough to just give people a voice, we have to make sure people aren’t using it to hurt people or spread misinformation. It’s not enough to give people control of their information, we have to make sure developers they’ve given it to are protecting it too. Across the board, we have a responsibility to not just build tools, but to make sure those tools are used for good.

It will take some time to work through all of the changes we need to make, but I’m committed to getting it right.

That includes improving the way we protect people’s information and safeguard elections around the world. Here are a few key things we’re doing:


Over the past few weeks, we’ve been working to understand exactly what happened with Cambridge Analytica and taking steps to make sure this doesn’t happen again. We took important actions to prevent this from happening again today four years ago, but we also made mistakes, there’s more to do, and we need to step up and do it.

What Happened

In 2007, we launched the Facebook Platform with the vision that more apps should be social. Your calendar should be able to show your friends’ birthdays, your maps should show where your friends live, and your address book should show their pictures. To do this, we enabled people to log into apps and share who their friends were and some information about them.

In 2013, a Cambridge University researcher named Aleksandr Kogan created a personality quiz app. It was installed by around 300,000 people who agreed to share some of their Facebook information as well as some information from their friends whose privacy settings allowed it. Given the way our platform worked at the time this meant Kogan was able to access some information about tens of millions of their friends.

In 2014, to prevent abusive apps, we announced that we were changing the entire platform to dramatically limit the Facebook information apps could access. Most importantly, apps like Kogan’s could no longer ask for information about a person’s friends unless their friends had also authorized the app. We also required developers to get approval from Facebook before they could request any data beyond a user’s public profile, friend list, and email address. These actions would prevent any app like Kogan’s from being able to access as much Facebook data today.

In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people’s consent, so we immediately banned Kogan’s app from our platform, and demanded that Kogan and other entities he gave the data to, including Cambridge Analytica, formally certify that they had deleted all improperly acquired data — which they ultimately did.

Last month, we learned from The Guardian, The New York Times and Channel 4 that Cambridge Analytica may not have deleted the data as they had certified. We immediately banned them from using any of our services. Cambridge Analytica claims they have already deleted the data and has agreed to a forensic audit by a firm we hired to investigate this. We’re also working with the U.K. Information Commissioner’s Office, which has jurisdiction over Cambridge Analytica, as it completes its investigation into what happened.

What We Are Doing

We have a responsibility to make sure what happened with Kogan and Cambridge Analytica doesn’t happen again. Here are some of the steps we’re taking:

  • Safeguarding our platform. We need to make sure that developers like Kogan who got access to a lot of information in the past can’t get access to as much information going forward.

      • We made some big changes to the Facebook platform in 2014 to dramatically restrict the amount of data that developers can access and to proactively review the apps on our platform. This makes it so a developer today can’t do what Kogan did years ago.

      • But there’s more we can do here to limit the information developers can access and put more safeguards in place to prevent abuse.

  • We’re removing developers’ access to your data if you haven’t used their app in three months.
  • We’re reducing the data you give an app when you approve it to only your name, profile photo, and email address. That’s a lot less than apps can get on any other major app platform.
  • We’re requiring developers to not only get approval but also to sign a contract that imposes strict requirements in order to ask anyone for access to their posts or other private data.
  • We’re restricting more APIs like groups and events. You should be able to sign into apps and share your public information easily, but anything that might also share other people’s information — like other posts in groups you’re in or other people going to events you’re going to — will be much more restricted.
  • Two weeks ago, we found out that a feature that lets you look someone up by their phone number and email was abused. This feature is useful in cases where people have the same name, but it was abused to link people’s public Facebook information to a phone number they already had. When we found out about the abuse, we shut this feature down.
  • Investigating other apps. We’re in the process of investigating every app that had access to a large amount of information before we locked down our platform in 2014. If we detect suspicious activity, we’ll do a full forensic audit. And if we find that someone is improperly using data, we’ll ban them and tell everyone affected.
  • Building better controls. Finally, we’re making it easier to understand which apps you’ve allowed to access your data. This week we started showing everyone a list of the apps you’ve used and an easy way to revoke their permissions to your data. You can already do this in your privacy settings, but we’re going to put it at the top of News Feed to make sure everyone sees it. And we also told everyone whose Facebook information may have been shared with Cambridge Analytica.

Beyond the steps we had already taken in 2014, I believe these are the next steps we must take to continue to secure our platform.


Facebook’s mission is about giving people a voice and bringing people closer together. Those are deeply democratic values and we’re proud of them. I don’t want anyone to use our tools to undermine democracy. That’s not what we stand for.

We were too slow to spot and respond to Russian interference, and we’re working hard to get better. Our sophistication in handling these threats is growing and improving quickly. We will continue working with the government to understand the full extent of Russian interference, and we will do our part not only to ensure the integrity of free and fair elections around the world, but also to give everyone a voice and to be a force for good in democracy everywhere.

What Happened

Elections have always been especially sensitive times for our security team, and the 2016 U.S. presidential election was no exception.

Our security team has been aware of traditional Russian cyber threats — like hacking and malware — for years. Leading up to Election Day in November 2016, we detected and dealt with several threats with ties to Russia. This included activity by a group called APT28, that the U.S. government has publicly linked to Russian military intelligence services.

But while our primary focus was on traditional threats, we also saw some new behavior in the summer of 2016 when APT28-related accounts, under the banner of DC Leaks, created fake personas that were used to seed stolen information to journalists. We shut these accounts down for violating our policies.

After the election, we continued to investigate and learn more about these new threats. What we found was that bad actors had used coordinated networks of fake accounts to interfere in the election: promoting or attacking specific candidates and causes, creating distrust in political institutions, or simply spreading confusion. Some of these bad actors also used our ads tools.

We also learned about a disinformation campaign run by the Internet Research Agency (IRA) — a Russian agency that has repeatedly acted deceptively and tried to manipulate people in the US, Europe, and Russia. We found about 470 accounts and pages linked to the IRA, which generated around 80,000 Facebook posts over about a two-year period.

Our best estimate is that approximately 126 million people may have been served content from a Facebook Page associated with the IRA at some point during that period. On Instagram, where our data on reach is not as complete, we found about 120,000 pieces of content, and estimate that an additional 20 million people were likely served it.

Over the same period, the IRA also spent approximately $100,000 on more than 3,000 ads on Facebook and Instagram, which were seen by an estimated 11 million people in the United States. We shut down these IRA accounts in August 2017.

What We Are Doing

There’s no question that we should have spotted Russian interference earlier, and we’re working hard to make sure it doesn’t happen again. Our actions include:

  • Building new technology to prevent abuse. Since 2016, we have improved our techniques to prevent nation states from interfering in foreign elections, and we’ve built more advanced AI tools to remove fake accounts more generally. There have been a number of important elections since then where these new tools have been successfully deployed. For example:

      • In France, leading up to the presidential election in 2017, we found and took down 30,000 fake accounts.

      • In Germany, before the 2017 elections, we worked directly with the election commission to learn from them about the threats they saw and to share information.

      • In the U.S. Senate Alabama special election last year, we deployed new AI tools that proactively detected and removed fake accounts from Macedonia trying to spread misinformation.

      • We have disabled thousands of accounts tied to organized, financially motivated fake news spammers. These investigations have been used to improve our automated systems that find fake accounts.

      • Last week, we took down more than 270 additional pages and accounts operated by the IRA and used to target people in Russia and Russian speakers in countries like Azerbaijan, Uzbekistan and Ukraine. Some of the pages we removed belong to Russian news organizations that we determined were controlled by the IRA.

  • Significantly increasing our investment in security. We now have about 15,000 people working on security and content review. We’ll have more than 20,000 by the end of this year.

      • I’ve directed our teams to invest so much in security — on top of the other investments we’re making — that it will significantly impact our profitability going forward. But I want to be clear about what our priority is: protecting our community is more important than maximizing our profits.

  • Strengthening our advertising policies. We know some Members of Congress are exploring ways to increase transparency around political or issue advertising, and we’re happy to keep working with Congress on that. But we aren’t waiting for legislation to act.

      • From now on, every advertiser who wants to run political or issue ads will need to be authorized. To get authorized, advertisers will need to confirm their identity and location. Any advertiser who doesn’t pass will be prohibited from running political or issue ads. We will also label them and advertisers will have to show you who paid for them. We’re starting this in the U.S. and expanding to the rest of the world in the coming months.

      • For even greater political ads transparency, we have also built a tool that lets anyone see all of the ads a page is running. We’re testing this in Canada now and we’ll launch it globally this summer. We’re also creating a searchable archive of past political ads.

      • We will also require people who manage large pages to be verified as well. This will make it much harder for people to run pages using fake accounts, or to grow virally and spread misinformation or divisive content that way.

      • In order to require verification for all of these pages and advertisers, we will hire thousands of more people. We’re committed to getting this done in time for the critical months before the 2018 elections in the U.S. as well as elections in Mexico, Brazil, India, Pakistan and elsewhere in the next year.

      • These steps by themselves won’t stop all people trying to game the system. But they will make it a lot harder for anyone to do what the Russians did during the 2016 election and use fake accounts and pages to run ads. Election interference is a problem that’s bigger than any one platform, and that’s why we support the Honest Ads Act. This will help raise the bar for all political advertising online.

  • Sharing information. We’ve been working with other technology companies to share information about threats, and we’re also cooperating with the U.S. and foreign governments on election integrity.

At the same time, it’s also important not to lose sight of the more straightforward and larger ways Facebook plays a role in elections.

In 2016, people had billions of interactions and open discussions on Facebook that may never have happened offline. Candidates had direct channels to communicate with tens of millions of citizens. Campaigns spent tens of millions of dollars organizing and advertising online to get their messages out further. And we organized “get out the vote” efforts that helped more than 2 million people register to vote who might not have voted otherwise.

Security — including around elections — isn’t a problem you ever fully solve. Organizations like the IRA are sophisticated adversaries who are constantly evolving, but we’ll keep improving our techniques to stay ahead. And we’ll also keep building tools to help more people make their voices heard in the democratic process.


My top priority has always been our social mission of connecting people, building community and bringing the world closer together. Advertisers and developers will never take priority over that as long as I’m running Facebook.

I started Facebook when I was in college. We’ve come a long way since then. We now serve more than 2 billion people around the world, and every day, people use our services to stay connected with the people that matter to them most. I believe deeply in what we’re doing. And when we address these challenges, I know we’ll look back and view helping people connect and giving more people a voice as a positive force in the world.

I realize the issues we’re talking about today aren’t just issues for Facebook and our community — they’re challenges for all of us as Americans. Thank you for having me here today, and I’m ready to take your questions.

The committee members will be asking questions on behalf of Facebook users in general and the American public in particular. Along the same lines, I have compiled the following questions that might help:

  1. What do you define as fake news?
  2. How would you verify whether news is fake?
  3. What processes and tools do you have in place that make every employee and business conscious of their responsibility for safeguarding Facebook user data?
  4. Which other social media outlets are also responsible for the spread of fake news?
  5. It will take some time to make changes at Facebook. What is the timeframe and what happens during this transition period?
  6. The issue of influencing democracy goes beyond the scope of Facebook. How is Facebook going to work with governments, United Nations (UN) and other international entities? What data are you going to be sharing with these entities?
  7. How will Facebook strike a balance between free speech and censorship (intentional and unintentional)?
  8. What background investigations would you be doing on businesses that are on Facebook?
  9. It seems that most actions listed by Facebook are reactive in nature. How are you proactively looking at threats at all levels from a broader perspective?
  10. How much blame should the Chief Information Security Officer (CISO) take for information security?
  11. How much blame should be pointed at the culture and politics at Facebook, where it seems the pursuit of revenue was preferred over national interests?
  12. Is it an internal or external team doing the forensic auditing? Will the findings of this audit be shared with the public?
  13. What do you plan to do if the forensic audits reveal lapses of judgment at Facebook?
  14. How much personal responsibility do Facebook users bear for sharing fake news, on purpose or by accident? What would happen to these users?
  15. As you utilize Artificial Intelligence, these systems can also have inherent biases, leading to false positives. What are you doing to address this?
  16. What would Facebook do if asked by friendly governments to interfere with other countries’ elections?

Since Facebook touches our lives in various ways, it is your right to ask your questions through your senators and representatives. Feel free to ask questions below as well.

So, what questions do you have for Mark Zuckerberg of Facebook?

5 Questions to Ask About Artificial Intelligence

In 1956, John McCarthy, the father of Artificial Intelligence (AI), brought together expert thinkers from multiple disciplines to explore how machines could “mimic” certain human traits. These expert thinkers came from the fields of Computer Science, Engineering, Logic, Mathematics and Psychology and wanted to find out how machines could:

  1. Use language
  2. Form abstractions and concepts
  3. Solve problems reserved for humans
  4. Improve themselves

Today, the field of AI also draws from the fields of Linguistics, Philosophy, Statistics, Economics and others. Due to the advancements and inclusion of various fields, the definition of what AI is has also evolved. What was once considered AI is now considered just one of many things a computer system does. In my view, AI is a capability: a computer system that can independently solve routine and non-routine problems through self-learning has AI capabilities. These capabilities can range from Optical Character Recognition (OCR) and Natural Language Processing (NLP) to Computer Vision, Motion Manipulation (in robotics), and others.

Under the hood, AI-capable computer systems are a combination of algorithms, data, hardware and software. When writing algorithms and eventually code for AI, software developers cannot really take into account all the various scenarios a computer system might encounter and what to do in those scenarios. Thus, AI-capable computer systems are coded in a way where they can learn from experience through training by using baseline datasets and then extrapolating them to other scenarios.
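The learn-from-a-baseline-then-extrapolate idea can be sketched in a few lines of Python. This is a hypothetical, minimal illustration (a nearest-neighbor rule on made-up data), not how any production AI system is built:

```python
# Rather than hand-coding a rule for every possible scenario, the system
# is given a baseline dataset and generalizes from it to unseen inputs.

def nearest_neighbor(baseline, point):
    """Classify `point` using the label of the closest baseline example."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(baseline, key=lambda example: dist(example[0], point))[1]

# Baseline dataset: (features, label) pairs provided up front (invented data).
baseline = [((1.0, 1.0), "low risk"), ((9.0, 9.0), "high risk")]

# A scenario never seen during training is still handled by extrapolation.
print(nearest_neighbor(baseline, (2.0, 1.5)))  # → low risk
```

The point of the sketch is that the developers never wrote a rule for the input `(2.0, 1.5)`; the decision comes entirely from the baseline data, which is why data quality matters so much.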

However, the problem with creating AI-capable computer systems is that they are still highly dependent on the quality of the underlying algorithms and datasets, both of which are created or provided by humans. As humans, we are prone to biases, not only in how we create algorithms but also in the incomplete data we supply, which can produce AI-capable computer systems that are biased and make incorrect decisions.
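This bias problem can be illustrated with a hypothetical sketch: the voting rule below is perfectly reasonable as an algorithm, yet a skewed, incomplete dataset still produces skewed decisions for the underrepresented group (all group names and data are invented):

```python
from collections import Counter

def majority_label(dataset, group):
    """Predict by majority vote among training examples from `group`."""
    labels = [label for g, label in dataset if g == group]
    return Counter(labels).most_common(1)[0][0]

# Group B is underrepresented, and its few examples carry skewed labels.
skewed = [("A", "approve")] * 80 + [("A", "deny")] * 20 \
       + [("B", "deny")] * 4 + [("B", "approve")] * 1

print(majority_label(skewed, "A"))  # → approve
print(majority_label(skewed, "B"))  # → deny
```

The algorithm never changed between the two calls; only the data behind each group did. Every applicant from group B is denied purely because of how little, and how lopsided, the data collected about that group was.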

For organizations that are looking to improve themselves, AI-capable computer systems can be used to help enhance customer experiences, improve operations, and provide insights for decision-making. On the flip side, AI-capable computer systems with weak algorithms and/or bad data can result in horrible decision-making. Now that we understand what AI is and how it can potentially be used, let’s ask the following questions:



1. Who is creating the underlying algorithms and cleaning the data? Who should be creating the underlying algorithms and cleaning the data?
2. What happens when AI-capable computer systems make bad decisions? What should happen when AI-capable computer systems make bad decisions?
3. Where are AI-capable computer systems relevant for decision-making? Where should AI-capable computer systems be relevant for decision-making?
4. When is data being acquired? When should data be acquired?
5. Why are AI-capable computer systems being used? Why should AI-capable computer systems be used?

As we can see, the human factor in AI-capable computer systems is a real threat/opportunity. And while we are far from creating sentient beings capable of general intelligence, we already have AI-capable computer systems that can perform narrow tasks better than humans. This means that, today and in the near future, specific tasks will be given to these AI-capable computer systems rather than to humans. Keeping this in mind, organizations and governments are trying to figure out how to address this AI wave and put programs in place for when certain jobs go extinct.


Top 5 Articles of 2017

Thank you to the readers in 106 countries who read my articles in 2017. Following are the top 5 articles that you were most interested in:

  1. Is Internet a Distributed System?
  2. What is the relationship between Cloud Computing and Service Orientated Architecture (SOA)?
  3. How to select an Enterprise Architecture Framework?
  4. Healthcare.gov – Who is at fault?
  5. 5 Questions to Ask About Your Business Transformation

Following are the top 20 countries where most readers have come from:

  1. United States
  2. India
  3. Canada
  4. United Kingdom
  5. Pakistan
  6. Australia
  7. Philippines
  8. Germany
  9. Malaysia
  10. Netherlands
  11. Singapore
  12. Sweden
  13. France
  14. South Africa
  15. South Korea
  16. Saudi Arabia
  17. Brazil
  18. Indonesia
  19. Switzerland
  20. Spain

75 Questionable Thoughts About Organizational Transformation

Questionable Thoughts

  1. Strategies
    • People
      • Only executives can transform organizations
      • Internal expertise has no/minimal value
      • Everyone will be eager to contribute
    • Processes
      • Internal business processes and Information Technology (IT) processes don’t matter
      • Interfacing processes with partners and vendors don’t matter
      • All processes and standard operating procedures (SOPs) have been documented and followed without deviation
    • Products
      • Our products can’t serve beyond current client industries
      • We don’t need to look for best-of-breed products
      • We don’t need product evaluation feedback from customers, employees, partners and vendors
    • Services
      • Employee experiences are not important
      • Customer experiences are not important
      • Partners and vendors’ experiences are not important
    • Technologies
      • IT doesn’t need to get involved
      • Shadow-IT doesn’t exist
      • IT is just an enabler
  2. Politics
    • People
      • All title-holders have the same power
      • Only leaders can be the go-to people
      • There are no biases at play
    • Processes
      • We always have fair ways of making decisions
      • We always methodically assess power and its effects
      • Power-grabs don’t happen
    • Products
      • Personal experiences don’t affect product selection
      • Personal experiences don’t affect product selling
      • Personal experiences don’t affect product development
    • Services
      • There is no correlation between employee services and customer services
      • Customer services don’t affect partners and vendors
      • Unconscious favoritism doesn’t happen during decision-making
    • Technologies
      • Technologies keep us unbiased
      • The ecosystem of technologies ends within organizational boundaries
      • IT can’t help
  3. Innovation
    • People
      • There is no correlation between organizational innovation and individuals being innovative
      • The innovation and experimentation of partners and vendors don’t affect us
      • People need to explore being innovative in their own time
    • Processes
      • Incremental and disruptive innovations follow same processes
      • Can’t learn from others’ failures
      • Innovation doesn’t require a methodical process
    • Products
      • A particular department/individual is responsible for innovation
      • Innovation of others doesn’t affect us
      • There is no need to have feedback loops from employees, customers, partners and vendors
    • Services
      • Only customer services can improve customer services
      • There is no need to test and improve customer service journeys
      • Wise to follow industry status quo standards
    • Technologies
      • There is no innovation left in technologies
      • Adapting technologies is the easiest thing to do
      • IT is a cost center and doesn’t require an innovation budget
  4. Culture
    • People
      • Only executives can set the cultural norms
      • External environments don’t affect culture
      • Culture is only about people
    • Processes
      • Business processes and IT processes don’t create culture
      • Culture is unquantifiable
      • Culture isn’t a learned behavior
    • Products
      • Culture doesn’t impact the products we buy
      • Culture doesn’t impact the products we sell
      • Culture has no implications on product development
    • Services
      • Culture doesn’t impact the services we buy
      • Culture doesn’t impact the services we sell
      • Culture has no implications on employee and customer journeys
    • Technologies
      • Technologies can’t augment culture
      • Technologies can’t destroy culture
      • Culture-clashes need to be normalized
  5. Execution
    • People
      • Preparing sponsors, champions and leaders isn’t necessary
      • Only a handful need to know about the overall strategy
      • Layoffs are on the table
    • Processes
      • No business processes and IT processes need to be adopted for transformation
      • No business processes and IT processes need to be adapted for transformation
      • No business processes and IT processes need to be abandoned for transformation
    • Products
      • Don’t need to learn and quantify how products succeeded
      • Don’t need to learn and quantify how products failed
      • Customer, employee, partner and vendor product usage has no relevance
    • Services
      • Customer experiences aren’t a priority when executing strategy
      • Employee experiences aren’t a priority when executing strategy
      • Don’t need to map the gaps of experiences
    • Technologies
      • Technologies can’t be used to execute strategy
      • Technologies can’t be misused to execute strategy
      • Technologies aren’t and can’t be ingrained into every aspect of executing strategy