2 Takeaways from the 2018 Spring Meetings by the International Monetary Fund (IMF) and the World Bank

Every year the IMF and the World Bank hold a conference-style event referred to as the Spring Meetings. These Spring Meetings bring together central bankers, ministers of finance and development, private sector executives and academics to discuss global issues such as the global economy, international development and the world’s financial markets.

This year I had the opportunity to attend the 2018 Spring Meetings, where discussions were held about the threats and opportunities of technological change as it affects global economies and policies. Here are 2 takeaways from the 2018 Spring Meetings focused on technology and innovation, along with some of my related articles:

    • Industrialization Paradigms
      • Typical Industrialization: Agriculture → Manufacturing → Services
      • Current Industrialization: Agriculture → Services
    • Impacts of Technology
      • Technological Changes → Job loss → Re-skill → New Jobs
      • Some jobs will never be recovered
      • Technology and expertise do not flow easily across countries
      • Even within countries, technological impacts are uneven, causing inequality
      • A good balance between data privacy and business models is needed, one that benefits societies at a larger scale
      • Whether innovation is internal or external to an organization can affect society at different levels
      • A good balance of foundational and advanced education is needed
      • Specialized knowledge can detract from holistic societal benefits
    • Artificial Intelligence (AI)
      • Dystopian Views: AI will take over most human activities and would rule over humans
      • Middle Ground Views: AI will augment and enhance human activities but never replace humans
      • Utopian Views: AI will take over most human activities that would free up time for humans to do other things
    • The Brave New World of Data
      • Data quality issues are borderless
      • Standard definitions of economic data have to be agreed upon and used
      • Data is being used to build economic policies
      • Data is being used to create multinational economic blocs
      • Data is being used to assess the humming of the global economy
      • Data Standardization and Harmonization → Data Transparency → Data Accountability
    • For economic prosperity, no organization, country or region is an island unto itself
    • Bridges need to be built across public, private, academic, non-profit and shareholder groups
    • Regulations are slow to adapt to technological advancements and can be too heavy-handed or too light-touch if not properly understood by policy makers
    • Grassroots changes are affecting how governments function and adapt
    • Technology and innovation should have executive level consideration across all branches of government and not just a ministry or a few people

Bonus: IMF’s Innovation Lab (iLab)

The IMF has created the iLab, whose goal seems to be to examine how technology and innovation are affecting the global economy and economic policies in various countries.

Related Articles:



World Map Data


5 Questions to Ask About Artificial Intelligence

In 1956, John McCarthy, the father of Artificial Intelligence (AI), brought together expert thinkers from multiple disciplines to explore how machines could “mimic” certain human traits. These expert thinkers came from the fields of Computer Science, Engineering, Logic, Mathematics and Psychology and wanted to find out how machines could:

  1. Use language
  2. Form abstractions and concepts
  3. Solve problems reserved for humans
  4. Improve themselves

Today, the field of AI also draws from the fields of Linguistics, Philosophy, Statistics, Economics and others. Due to the advancements and inclusion of various fields, the definition of what AI is has also evolved. What was once considered AI is now considered just one of many things a computer system does. In my view, AI is a capability, and thus a computer system that can independently solve routine and non-routine problems through self-learning has AI capabilities. These capabilities can range from Optical Character Recognition (OCR), Natural Language Processing (NLP) and Computer Vision to Motion Manipulation (in Robotics) and others.

Under the hood, AI-capable computer systems are a combination of algorithms, data, hardware and software. When writing algorithms and eventually code for AI, software developers cannot realistically account for all the scenarios a computer system might encounter and what to do in each of them. Thus, AI-capable computer systems are coded so that they can learn from experience: they are trained on baseline datasets and then extrapolate from them to other scenarios.
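A minimal sketch of this train-then-extrapolate idea, using a toy nearest-neighbour learner in plain Python (the data, labels and function names are all illustrative, not any particular product's API):

```python
# Minimal sketch: a system "learns" from a baseline (training) dataset
# and extrapolates to scenarios it has never seen before.

def predict(baseline, new_point):
    """Classify new_point with the label of its nearest baseline example."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(baseline, key=lambda example: distance(example[0], new_point))
    return nearest[1]

# Baseline data: (features, label) pairs provided by humans.
baseline = [
    ((1.0, 1.0), "routine"),
    ((1.2, 0.8), "routine"),
    ((8.0, 9.0), "non-routine"),
    ((9.5, 8.5), "non-routine"),
]

# A scenario never seen during training: the system extrapolates.
print(predict(baseline, (8.8, 9.2)))  # → non-routine
```

Note how the quality of the answer depends entirely on the baseline examples humans supplied, which is exactly the dependency discussed next.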

However, the problem with creating AI-capable computer systems is that they are still highly dependent on the quality of the underlying algorithms and datasets, both of which can be created or provided by humans. As humans, we are prone to biases, not only in creating algorithms but also in supplying incomplete data, which can produce AI-capable computer systems that are biased and make incorrect decisions.

For organizations that are looking to improve themselves, AI-capable computer systems can be used to help enhance customer experiences, improve operations and provide insights for making decisions. On the flip side, AI-capable computer systems that have weak algorithms and/or bad data can result in horrible decision-making. Now that we understand what AI is and how it can potentially be used, let’s ask the following questions:


  • Who is creating the underlying algorithms and cleaning the data? Who should be creating them?
  • What happens when AI-capable computer systems make bad decisions? What should happen when they do?
  • Where are AI-capable computer systems relevant for decision-making? Where should they be?
  • When is data being acquired? When should it be acquired?
  • Why are AI-capable computer systems being used? Why should they be?

As we can see, the human factor in AI-capable computer systems is a real threat/opportunity. And while we are far away from creating sentient beings capable of general intelligence, right now we do have AI-capable computer systems that can perform narrower tasks better than humans. What this means is that today and in the near future, specific tasks will be given to these AI-capable computer systems rather than to humans. Keeping this in mind, organizations and governments are trying to figure out how to address this AI wave and to put programs in place for when certain jobs go extinct.

Artificial Intelligence - Algo + Data

5 Questions to Ask About Your Big Data

In statistics, a hypothesis is proposed and then data samples are collected to prove or disprove the hypothesis with acceptable confidence levels. For example, let’s say our hypothesis is that all our customers are aware of all our product lines. There are two ways of assessing this hypothesis: (1) proving it and (2) disproving it.

The first way, proving our hypothesis, is to communicate with all of our customers and confirm that every one of them knows all our product lines. The second way is to communicate with as many customers as possible until we come across a customer who does not know all our product lines. From this example, we can see that finding even one such customer disproves our hypothesis. This is why, in statistics, it is sometimes easier to find an exception that disproves a hypothesis than it is to prove the hypothesis.
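The asymmetry between proving and disproving can be sketched in a few lines of Python (the customer names and survey data below are made up for illustration):

```python
# Illustrative sketch: one counterexample disproves a universal
# hypothesis, while proving it requires checking every customer.

# Hypothetical survey data: which product lines each customer knows.
product_lines = {"A", "B", "C"}
customers = {
    "alice": {"A", "B", "C"},
    "bob":   {"A", "B", "C"},
    "carol": {"A", "B"},       # does not know line C
}

def hypothesis_holds(customers, product_lines):
    """Return (False, name) at the first counterexample, else (True, None)."""
    for name, known in customers.items():
        if known != product_lines:
            return False, name   # stop early: one exception is enough
    return True, None            # proving it meant checking everyone

holds, counterexample = hypothesis_holds(customers, product_lines)
print(holds, counterexample)  # → False carol
```

The loop can stop at the first exception, but it can only return `True` after surveying the entire population, which is the point the paragraph above makes.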

Big Data, on the other hand, inverts the generally accepted process from hypothesis-then-data-collection to data-collection-then-hypothesis. What this means is that Big Data emphasizes collecting data first and then coming up with a hypothesis based on patterns found in the data. Generally speaking, when we talk about Big Data, we are concerned with the 3 Vs:

  • Volume – Amount of data
  • Velocity – Rate at which data is generated and analyzed
  • Variety – Different data sources

Some have indicated that we need to go beyond just the above three Vs and should also include:

  • Viscosity – Resistance to the flow of data
  • Variability – Changes in the flow of data
  • Veracity – Accuracy and trustworthiness of the data
  • Volatility – How long the data remains valid
  • Virality – Speed at which data is shared

I would take the Big Data concept a bit further and introduce:

  • Vitality – General and specific importance of the data itself
  • Versatility – Applicability of data to various situations
  • Vocality – Supporters of data-driven approaches
  • Veto – The ultimate authority to accept or reject Big Data conclusions

For a metrics-driven organization, a possible way to determine the effectiveness of your Big Data initiatives is to do a weighted rating of the Vs based on your organizational priorities. These priorities can include, but are not limited to, increasing employee retention rates, improving customer experiences, improving mergers and acquisitions activities, making better investment decisions, effectively managing the organization, increasing market share, improving citizen services, faster software development, improving design, becoming more innovative and improving lives. What all of this means is that data is not just data; it is in fact an organization’s most important asset after its people. Since data is now a competitive asset, let’s explore some of the ways we can use it:
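The weighted rating idea is straightforward to compute; here is a small sketch in Python where the choice of Vs, the weights and the ratings are all purely hypothetical and would come from your own priorities:

```python
# Sketch of a weighted rating of the Vs against organizational
# priorities. All weights and ratings below are illustrative.

# How much each V matters to this (hypothetical) organization; sums to 1.0.
weights = {"volume": 0.2, "velocity": 0.3, "variety": 0.1, "veracity": 0.4}

# How well the current Big Data initiative rates on each V (0-10 scale).
ratings = {"volume": 8, "velocity": 5, "variety": 6, "veracity": 4}

# Weighted effectiveness score: sum of weight * rating for each V.
score = sum(weights[v] * ratings[v] for v in weights)
print(round(score, 2))  # → 5.3
```

Tracking such a score over time, rather than its absolute value, is what makes it useful as a metric.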

  • Monte Carlo Simulations – Determine a range of possible outcomes and their probabilities
  • Analysis of Variance (ANOVA) – Determine whether results differ significantly across groups of data
  • Regression – Determine whether variables are related and can be used for forecasting
  • Seasonality – Determine whether data shows the same patterns recurring at regular intervals
  • Optimization – Getting the best possible answer from the data
  • Satisficing – Getting a good enough answer from the data
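To make the first technique concrete, here is a minimal Monte Carlo sketch in Python; the revenue model, its parameters and the $9,000 target are invented for illustration:

```python
import random

# Minimal Monte Carlo sketch: simulate many scenarios of a
# hypothetical monthly revenue and estimate outcome probabilities.

random.seed(42)  # fixed seed so runs are reproducible

def simulate_revenue():
    """One scenario: units sold and unit price both vary randomly."""
    units = random.gauss(1000, 100)    # mean 1000 units, std dev 100
    price = random.uniform(9.0, 11.0)  # unit price between $9 and $11
    return units * price

# Run many scenarios to approximate the distribution of outcomes.
outcomes = [simulate_revenue() for _ in range(10_000)]

# Estimated probability that revenue falls below a hypothetical target.
p_below_target = sum(o < 9000 for o in outcomes) / len(outcomes)
print(f"P(revenue < $9,000) ≈ {p_below_target:.2f}")
```

The simulation does not give one answer; it gives a distribution of answers, which is exactly the "range of possible outcomes and their probabilities" described above.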

Now that we understand what Big Data is and how it can be used, let’s ask the following questions:


  • Who is capturing data? Who should be capturing data?
  • What is the lifecycle of your data? What should it be?
  • Where is data being captured? Where should it be captured?
  • When is data available for analysis? When should it be available?
  • Why is data being analyzed? Why should it be?

Having discussed the positives of Big Data, we have to realize that it is not a panacea and has its negatives as well. Some of the ways data can lead to bad decisions include: (1) correlation in the data does not imply cause and effect, (2) data can show you pretty pictures without telling you the truth and (3) biases can affect data anywhere from capture to analysis to decision-making.
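The first pitfall is easy to demonstrate; in this sketch the yearly figures are invented, echoing the classic ice-cream-and-drownings example, where both series simply rise together without either causing the other:

```python
# Illustrative sketch: two series can be strongly correlated
# without any cause-and-effect relationship between them.

# Hypothetical yearly figures (both trend upward over five years).
ice_cream_sales = [100, 120, 140, 160, 180]
drownings       = [10, 12, 13, 16, 18]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream_sales, drownings)
print(round(r, 3))  # close to 1.0, yet neither series causes the other
```

A near-perfect correlation here reflects a shared driver (hot weather), not causation, which is why correlation alone should never settle a decision.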

In conclusion, what this means is that the undistorted quality, understanding and usage of data is the difference between merely jumping on the Big Data bandwagon and truly understanding how data can fundamentally change your organization.

Big Data Vs


  1. Realizing the Promise of Big Data
  2. Beyond the three Vs of Big Data
  3. 5 Factors for Business Transformation
  4. 5 Questions to Ask About Your Business Processes
  5. 5 Questions to Ask About Your Information
  6. 5 Questions to Ask About Customer Experiences
  7. 5 Observations on Being Innovative (at an organizational level)
  8. Where is my Big Data coming from and who can handle it

Where is My Big Data Coming From and Who Can Handle It

Recently, a reader asked for my insights on the article (Data Scientists are the New Rock Stars as Big Data Demands Big Talent). Here is my response.

It seems that in today’s world, people and organizations are struggling with the Big Data concept and do not know where to begin. As a result, they are collecting everything they can think of in the hope that one day they will be able to use this data in a meaningful way, such as better customer experiences, new products/services, better collaboration, increased revenue, etc. This hope-based approach of “let’s collect data and later decide what we can use it for” might seem sound on the surface, but last I checked, hope is not a strategy. Perhaps this is one of the reasons that even now, less than 1% of collected data is actually being analyzed. What good is more data when an organization cannot even make sense of the 99%+ it already has? Are we chasing a ghost?

While it is true that vast amounts of data are and will be generated from financial transactions, medical records, mobile phones and social media to the Internet of Things, there are questions that need to be asked to understand data’s meaningful use:

  1. How will data be managed?
  2. How will data be shared?

I believe that in order to come to a point where data becomes meaningful and useful it would require (broadly speaking) three phases:

  1. Establishment of standards, governance, guidelines. (E.g., open architectures)
  2. Creation of industry specific data exchanges. (E.g., healthcare data exchanges, environment data exchanges etc.)
  3. Creation of cross-industry data exchanges. (E.g., healthcare data exchanges seamlessly interacting with environmental data exchanges etc.)

Additionally, let’s keep in mind that the data we are talking about is data that can be captured by current tools and systems. The data that is perhaps the most difficult to capture is unstructured human data, which within organizations is called Institutional Knowledge. It does not reside in a document or a system but in the minds of the people of an organization who understand what needs to be done in order to move things forward.

So, the question becomes: do we really need Data Scientists who have a mix of coding skills, PhDs in scientific disciplines and business sense, or do we need someone who is able to connect the dots and has the ability to create the future? The answer is not a simple one. Perhaps you need both. The ability to code should not be the deciding factor, but rather the ability to leverage technology and data. I agree that there is a shortage of people with this diverse talent, but there is also a shortage of people who actually know how to leverage this kind of talent.

Before organizations go on a hiring spree they should consider:

  1. Why do they need a Data Scientist? (E.g., have strategic intent, jumping on the bandwagon etc.)
  2. Who will the Data Scientist report to? (E.g., Board, CEO, CFO, COO, CIO etc.)
  3. Does the organization have the ability to enhance/change its business model? (E.g., making customers happy, leading employees etc.)
  4. Is the Data Scientist really an IT person with advanced skills or does s/he have advanced skills and happens to know how to leverage technology and data?
  5. How often will you measure the relevancy of the data? (E.g., key data indicators)

3 Phases of Big Data Harmonization