A Review of President Joe Biden’s American Rescue Plan

On January 14, 2021, President Joe Biden released his new administration’s $1.9 trillion plan, called the American Rescue Plan. This plan shows the priorities of the new White House administration and calls on Congress to help the American people in many ways. Here are some of the most repeated words in this plan:

[Word cloud: most frequently used words in the American Rescue Plan]

Surprisingly, at the end of the American Rescue Plan, there is a section that asks for increased funding to “Modernize federal Information Technology (IT) to protect against future cyber attacks”. Here are my thoughts on it:

Expand and Improve Technology Modernization Fund (TMF)

For now and the foreseeable future, technology will continue to have a direct impact on our lives. Nowhere is this more evident than in the U.S. Federal Government, which is the largest purchaser of technology in the world. Recognizing this, the Technology Modernization Fund (TMF) was created in 2017 with the mission of helping U.S. Federal Government Departments and Agencies use technology efficiently, effectively and securely.

TMF receives proposals from U.S. Federal Government Departments and Agencies describing what they want to accomplish. After vetting the proposals, TMF provides funding to those Departments and Agencies, which then have a five-year window to repay the funds. TMF receives its own funding from Congress. To understand how much funding has been approved for TMF, let’s look at the budget requests versus the appropriations received in the table below:

Fiscal Year | Budget Request | Appropriations Received
2021 | $150M | $25M
2020 | $150M | $25M
2019 | $210M | $25M
2018 | $228M | $100M

TMF Budget Requests vs. Appropriations Received

As we can see from the table above, TMF has never received the budget it has requested, even though its mission is essentially to benefit the American public through technology.

In regards to cybersecurity, the Cybersecurity and Infrastructure Security Agency (CISA) was created in 2018 with the mission of leading the national effort to mitigate cyber and physical risks to vital infrastructure. To understand how much funding has been approved for CISA, let’s look at the budget requests versus the appropriations received in the table below:

Fiscal Year | Budget Request | Appropriations Received
2021 | $1.7B | $2.25B
2020 | $3.1B | $2B
2019 | $3.3B | $1.68B

CISA Budget Requests vs. Appropriations Received

As we can see from the table above, in FY2021 CISA received more funds than it asked for because Congress felt that cybersecurity was important, especially when it came to the elections.

The American Rescue Plan presented by President Joe Biden asks Congress to increase the combined appropriations to $9B, which would be used to expand IT shared services and cybersecurity across the U.S. Federal Government. The plan also recommends removing the requirement that U.S. Federal Government Departments and Agencies reimburse TMF within five years. I think these are good first steps. However, we have to also see these $9B in funds as an opportunity to:

  1. Hire the right people/contractors to do the right jobs
  2. Prioritize and optimize processes before cloud migration and automation
  3. Apply technology at the right time with redundancy and security built-in
  4. Create services that improve experiences for the American public
  5. Create products that make government more digital

Surge Cybersecurity Technology and Engineering Expert Hiring

The Office of Management and Budget (OMB) is responsible for managing the Information Technology Oversight and Reform (ITOR) fund, which produces quarterly reports on Government-wide IT reform efforts to reduce operating costs across U.S. Federal Government Departments and Agencies. The American Rescue Plan also asks for $200M to hire cybersecurity professionals who can improve the U.S. Federal Government’s cybersecurity efforts.

The problem here is two-fold:

  1. IT is always asked to do more with less. This is also true in government. Sometimes IT is asked to produce cost savings; however, IT does not and should not operate in a bubble. This means that if you truly want to transform an organization, then the motivation should not be cost savings alone. The motivation should be efficient and effective processes augmented by IT, resulting in frictionless operations that directly provide value to the American public.
  2. While hiring cybersecurity professionals is a line of defense, it is not the only one. Cybersecurity needs to be a whole-of-government approach, which requires cultural change in the different U.S. Federal Government Departments and Agencies. This change requires everyone in these organizations to be vigilant and mechanisms to be put in place that not only train people but also test them unexpectedly.

In both of the above issues, culture plays a very important role.

Build Shared, Secure Services to Drive Transformation Projects

The mission of the Technology Transformation Services (TTS) under the General Services Administration (GSA) is to create a digital government that benefits the American people. They do this through various programs and services provided to different U.S. Federal Government Departments and Agencies. The American Rescue Plan asks for $300M so that TTS can create more such programs and services. In order for this to work:

  1. We have to make sure that the different U.S. Federal Government Departments and Agencies see TTS not as a last resort but as the first option to consider.
  2. While TTS’ solutions in Data and Analytics, Innovation, Public Experience, Secure Cloud, Smarter IT, Cloud.gov, Login.gov and other Free and Low-Cost Tools are great, they are not enough. I think TTS should expand its mandate to become a “connector” between different agencies so that redundancies can be reduced and lessons learned can be shared.

Improving Security Monitoring and Incident Response Activities

The American Rescue Plan asks for an additional $690M for CISA to improve shared cybersecurity and continue moving toward the cloud. What the SolarWinds debacle has taught us is that we are highly reliant not only on technology but also on its interconnectedness. This means that to tackle security monitoring and incident response, we have to think holistically. This requires us to start with the basics (a small inventory sketch follows the list below):

  1. Determine the number of legacy systems government-wide
  2. Determine how often those legacy systems are updated/patched
  3. Determine if those legacy systems are too big to fail
  4. Determine what (budget, expertise, time) is missing to replace legacy systems
  5. Determine what data is fed/received from those legacy systems
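As a hypothetical illustration only (the field names and example data below are my own assumptions, not an existing government data model), a simple inventory structure like this could capture the answers to the questions above for each legacy system:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class LegacySystem:
    """One entry in a hypothetical government-wide legacy system inventory."""
    name: str
    agency: str
    last_patched: date                  # how recently the system was updated/patched
    too_big_to_fail: bool               # would an outage halt a critical mission?
    replacement_gaps: List[str] = field(default_factory=list)  # budget, expertise, time
    data_feeds_in: List[str] = field(default_factory=list)     # data received from other systems
    data_feeds_out: List[str] = field(default_factory=list)    # data sent to other systems

# Example usage with made-up data
inventory = [
    LegacySystem(
        name="Benefits Mainframe",
        agency="Example Agency",
        last_patched=date(2019, 6, 1),
        too_big_to_fail=True,
        replacement_gaps=["budget", "expertise"],
        data_feeds_in=["identity verification service"],
        data_feeds_out=["payment processing system"],
    ),
]

# Simple roll-ups that answer the questions in the list above
print("Total legacy systems:", len(inventory))
print("Too big to fail:", sum(s.too_big_to_fail for s in inventory))
```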

Other Areas Where IT can Help

IT can help in other areas mentioned in the American Rescue Plan too. Here are a few suggestions:

  1. For a national vaccination program, containing COVID-19, and safely reopening schools:
    • Create a system that automatically pulls data from the COVID-19 tracking systems of States, Localities, Tribes and Territories. Use this data to determine which areas have the fastest rates of spread and then deploy/create vaccination centers in those areas first. Encourage States, Localities, Tribes and Territories to share their mitigation steps and results in this portal as well so that lessons can be learned. Make this system publicly available.
    • Create a system that tracks what tests have been done, what tests need to be done, the frequency of tests required and what States, Localities, Tribes and Territories did after tests were administered.
    • Create a system that pulls in contact tracing data from all States, Localities, Tribes and Territories. Use this data to determine movement of the virus across jurisdictions.
    • Create a system that tracks what communities are underserved.
    • Create a system that tracks supplies, usage, disposal and waste.
    • Create a system that automatically tracks virus data from other countries, its spread rate and any mitigation strategies that proved helpful in reducing infections.
  2. For delivering immediate relief to working families bearing the brunt of this crisis:
    • Create a system that takes into account the cost of living for government assistance so that support from States, Localities, Tribes and Territories can be effectively augmented by the U.S. Federal Government.
    • Create a system that tracks assistance for healthy nutrition options in relation to health issues in each community.
    • Create a system that tracks which organizations do not provide paid sick leave to their employees.
    • Create a system that tracks which organizations do not pay their employees at least $15/hour.
    • Create a system that tracks which organizations do not provide back hazard pay to their employees.
  3. For providing critical support to struggling communities:
    • Create a system that provides seed funding to individuals, especially people of color, to easily start businesses.
    • Create a system that incentivizes and tracks States, Localities, Tribes and Territories public transit efforts.

Technical Chops Of The Democratic Presidential Candidates

On November 3, 2020, the United States of America will hold its presidential election. This election will determine whether Republican President Donald J. Trump gets another four years in office or whether there will be a new Democratic President. The Democratic Presidential Candidates cover a lot of topics that they think are of interest to the American public. For me, that topic is technology. Specifically, the technology policies, the technology uses and the technology abuses in the private and public sectors.

Everything we do today and for the foreseeable future is either directly or indirectly related to technology. Thus, in this post, I am going to go through each Democratic Presidential Candidate’s campaign pages to see what they are saying about technology and then provide my own views. Here it goes…

In My Point of View:

The United States needs data privacy legislation at the federal, state and local levels. In order to create data privacy legislation, all levels of government and industry have to:

  1. Define what data is and isn’t
  2. Determine who (companies, consumers, government) has this data
  3. Determine how data privacy legislation would apply when data is captured, at rest, in motion, moving between systems/apps, etc.
  4. Create global alliances across countries and regions
  5. Develop a course of action for when agreed-upon rules are not followed

Let’s keep in mind that even though Europe has the General Data Protection Regulation (GDPR) and California has the California Consumer Privacy Act (CCPA), there is currently no data privacy legislation that is 100% global in nature.

In regards to taxing the organizations that sell consumer data, while it seems alluring on paper, the problem is that when most consumers sign up for ‘free’ services online (i.e., social media, email, etc.), they essentially agree to however the organization wants to use their data. Also, some organizations could avoid data taxation by simply storing and selling the data in a country that doesn’t tax them on data transactions. This, in turn, can create more problems for the safekeeping of the data.

In regards to putting extra government fees on megadeals (i.e., mergers, acquisitions, etc.), while this would make regulatory agencies’ budgets bigger, on the flip side, approving megadeals could become a rubber stamp just to collect higher government fees. In a megadeal, when organizations have to figure out whether their deals would affect current and future competition, this requires a tremendous amount of time and resources whose costs might be passed on to the consumer through higher prices and/or more detailed data collection.

If a government agency is tasked with breaking up tech, this would require a big budget and the expertise to truly understand what is happening in tech and its nuances within these companies. Asking these agencies to go break tech up would just create a mess, especially when these companies always have the option to operate from another country whose rules might be more relaxed. Additionally, the government doesn’t pay well, and to think that super-smart people will work for the government their whole careers is just foolhardy.

In My Point of View:

The Green New Deal focuses on creating technologies that can tackle climate change. While this is a good approach, I think in order to make it stronger, it is essential to look at the current impact of technology on consumers, how technology is marketed to consumers and the waste technology creates in terms of energy consumption and physical materials harvested from the Earth. We also have to look at how the recycling of technology works. Recycling should not be just the collection and disposal of technology waste; it should be a 360-degree approach where the emphasis is on reusing old technology and technology parts. We also have to consider the impact on jobs when moving to a 100% green economy. The government could provide free training and job retraining, which could help reduce some anxiety.

In regards to broadband, it should be a fundamental right for every person to have access to high-speed Internet. While the government can help create incentives to build the infrastructure for it, we have to remember that the monopoly of internet access providers is a very real threat.

In My Point of View:

Information and disinformation tactics have been used throughout human history. These tactics have taken on a new face in today’s digitally connected world. The idea that anyone can start disinformation on any social media website with a few clicks is concerning. Ideally, the private and public sectors would put checks and balances in place to monitor and ensure disinformation is not used. However, it is a threefold problem where disinformation production, disinformation consumption, and disinformation monitoring have to be dealt with equally. As humans, we are prone to biases, and these get amplified once we are online. Additionally, we have to note that most social media organizations are for-profit entities, and thus there is little incentive for these organizations to make stopping the spread of disinformation a priority.

In regards to breaking up tech to spur innovation and competition, it seems good on paper, but what is essentially being said is that if an organization reaches a certain size then the government will look into breaking it up. This idea seems anti-capitalistic. Tech is an ecosystem, and breaking up tech means disrupting that ecosystem. To be clear, because of these tech ecosystems, many small businesses have also emerged. Think about the small businesses that are able to advertise on Google to anyone in the world, the small businesses that use Amazon to sell their products to a wider audience, and the small businesses that have used Facebook as a place to test their marketing strategies at minimal cost. The ripple effects of a tech breakup have to be understood and studied thoroughly before going this route. Additionally, due to their global reach and connectedness, tech companies are not bound to one geographical location. These organizations can simply pack their bags and move to more tech-friendly countries, which means not only job loss but also brain drain.

In My Point of View:

For the climate change revolution to take place, we need to look at energy production as well as energy consumption. We can’t out-tech our way out of the imminent climate disaster. We have to look at energy holistically, which means making tough choices when it comes time to do so. But these tough choices don’t have to be at the expense of anyone. While it is true that a climate change revolution will create many jobs, what about the jobs that would be lost? We have to provide incentives for people to join the new green economy. No one should be left behind.

The future of education requires us to think in terms of a lifetime approach to pursuing knowledge. In this pursuit, teachers, coaches, parents, and guardians play an important role, in addition to the environment that we create for students. To hamper a student’s lifetime success simply because they were born in a certain zip code is simply cruel. Everyone should have the ability to pursue knowledge physically and/or virtually, regardless of their situation. This is where technology comes into play. Technology can be the great equalizer, not only in terms of pursuing knowledge online but also in terms of making students globally competitive. We have to teach not only the ability to use technology but also the ability to enhance, modify, develop, and extrapolate what technology can do.

  • Michael Bloomberg
  1. Infrastructure
  2. All-In Economy

In My Point of View:

The US needs to update its infrastructure and create new infrastructure that enhances the quality of life for all its residents. Infrastructure is not only about roads, bridges, and transportation; it’s about technology as well. Technology infrastructure means fiber optics, networking switches, broadband, various types of clouds and software. As long as we don’t include technology as part of our overall infrastructure goals, we will surely become obsolete sooner rather than later.

In regards to creating jobs of the future, we have to make a decision about what future we want. A future that does not consider the effects of technology will not be a future at all. In the long term, most jobs can and will be replaced by technology. The question is not if but when, and when is happening right now. The number of people who will be displaced is tremendous, and it’s high time we take our heads out of the sand. As technology becomes more commoditized, jobs will be for people who not only understand the technology but who can also connect the dots through technology.

  • Pete Buttigieg
  1. Education
  2. Building for the 21st Century

In My Point of View:

When it comes to looking at the economy as a whole and at what other countries are doing, providing technology education is important. What is also important is not losing those who pursue higher education in the US and are then forced to return to their home countries. These people then compete directly with the US from those countries. This process can’t continue. Technology education can unlock the potential of a generation, but we can’t forget those who will be left behind.

In regards to building for the 21st century, we have to think about where we are, where we want to be and what it will take in terms of initiatives from federal, state, local, non-profit, for-profit and academia. We have to think not only in terms of physical things but we also have to look at the happiness of our residents and the positive effects we can create for the environment.

In comparison, here are the technological achievements of President Trump so far.

Final Thoughts

While all of the above technology-related topics are important, what we are missing is a comprehensive National Digital Strategy that is agreed upon at the federal, state and local levels. What we need are legislators and regulators who understand the power of technology. What we need are people who know that technology can change the economy and even the government.


5 Questions To Ask About Enterprise Architecture (EA)

In 1987, John Zachman published an article in the IBM Systems Journal called A Framework for Information Systems Architecture, which laid the formalized foundation of Enterprise Architecture. In the 1990s, John Zachman further developed the idea of classifying, organizing and understanding an organization by creating The Zachman Framework™. The Zachman Framework™ describes understanding an organization in terms of:

  1. Data
  2. Function
  3. Network
  4. People
  5. Time
  6. Motivation

Today, the field of Enterprise Architecture (EA) also draws from the fields of Engineering, Computer Science, Business Administration, Operations Research, Psychology, Sociology, Political Science, Public Administration, and Management. Due to these advancements and the inclusion of various fields, the definition of EA continues to evolve depending on whether you are a practitioner, academic, vendor or government, but the basic premise of Enterprise Architecture is to holistically understand the entire organization in order to make management decisions.

In addition to The Zachman Framework™, many other EA frameworks have emerged over the years to help an organization understand where it is (current state or as-is), where it wants to be (future state or to-be) and what steps (transitions) it should take to get to that future. Some of these EA frameworks include:

  1. The Open Group Architecture Framework (TOGAF)
  2. Federal Enterprise Architecture Framework (FEAF)
  3. Department of Defense Architecture Framework (DoDAF)

To be clear, EA is not only about frameworks; it’s also about the EA methodology, tools, artifacts, and best practices. As you develop EA within your organization, you will realize that not all frameworks and tools fit perfectly; it is a continuous improvement over time. Regardless of the size of the organization, EA can help create a holistic-thinking mentality, optimize business processes and improve decision-making.

By now you might be thinking that of course, EA is the answer to your woes. But hold on! Before you jump into EA, it is critical to know: 1) The term EA and its jargon can confuse people, 2) EA is about the entire enterprise (aka organization) and not about just certain functions of the organization, 3) People working under the EA function should have a complete grasp of Business operations and IT capabilities, 4) EA is not an IT activity and 5) EA’s purpose is to communicate what is happening and what could happen.

For organizations, EA is like an overarching umbrella which when used effectively can have a profound impact but if used incorrectly can turn into a burden to carry. Keeping these things in mind, let’s ask the following questions:

Today | Tomorrow
Who is demanding the need for EA and who is creating it? | Who should be demanding a need for EA and who should be creating it?
What if EA fails? | What should happen when EA fails?
Where is EA helping in decision-making? | Where should EA help in decision-making?
When are EA artifacts being collected? | When should EA artifacts be collected?
Why is EA being used? | Why should EA be used?
As we can see, whoever sees a need for EA matters, EA champions within various organizational functions matter, EA execution matters, EA measurement matters and EA best practices for organization-wide improvement matter. It should be noted that all organizations do EA in some way (unformalized, semi-formalized or fully formalized).


A Voice Over Internet Protocol (VoIP) Solution

Credits: Alex, Arsalan Khan, Dan Hopkins, Eddie Heironimus and Uzair Khan

1. EXECUTIVE SUMMARY

This report provides the Chief Information Officer (CIO) of Citadel Plastics (CP) – a fictional organization – with recommendations and justifications to help her make a procurement decision on selecting a Voice over Internet Protocol (VoIP) solution. In this paper, we analyze the business and technology issues faced by the organization. Our team performs this analysis by identifying the current issues with the telecommunications environment across various worldwide locations and the future needs of CP. For this report, we have made the following assumptions:

General, Business and Technology Assumptions

  • The final decision to choose the VoIP solution rests with the CIO
  • The various vendor business applications are flexible enough to connect with any other system
  • The sales offices have high-speed broadband connections while the remote sites do not
  • Each sales office has 15-20 users
  • Each manufacturing site has 300-400 users but only a handful would be receiving CAD models
  • File Transfer Protocol (FTP) is used to exchange CAD models between the engineering team in the sales office and the manufacturing sites
  • CAD models are between 100MB and 300MB
  • Currently the mobile computing options are limited

Table 1: VoIP Solution Assumptions

Based on the above assumptions, and keeping in mind the future growth of CP, our team recommends the following two options be considered for the purchase of a VoIP solution:

Option # 1 (Cloud)

Benefits:
  • Easy to set up and maintain
  • Simple plug-and-play functionality
  • Low cost
  • Full-featured functionality

Risks:
  • No Quality of Service (QoS) on Internet traffic to the cloud provider
  • Risk of provider outage (both technical and operational)
  • Lack of control over the technical solution
  • Privacy/Security: exposure of call data to the cloud provider
  • Updates/changes to the cloud would impact our deployment

Costs:
  • $24.99 per user per month for the Standard account
  • $34.99 per user per month for Premium (for Salesforce.com integration)
  • $44.99 per user per month for Enterprise (10,000 toll-free minutes)

Option # 2 (On-premise)

  • VoIP solution control
  • Maintenance and upgrades

Table 2: VoIP Solution Options

While both options have pros and cons, our team has determined that due to reliability considerations, the on-premise VoIP solution is the better choice. We have assessed that even though the on-premise VoIP solution is more expensive in the short term, it would prove practical in the long term.

2. PROBLEM STATEMENT

The decision to deploy a VoIP solution can be a large hurdle for Citadel Plastics, especially for end-users who are habituated to our legacy systems of corporate communication. Aside from the difficulties involved in breaking the habit, old systems such as the Public Switched Telephone Network (PSTN) and Plain Old Telephone Service (POTS) have a proven track record of being stable over a long time. Regardless, these systems should be labeled as outdated technologies that are no longer applicable to the business growth that we are experiencing. Given our increasing dependence on exchanging data between our manufacturing sites and sales offices, it is imperative that we switch to a solution that increases our broadband capacity. Transitioning to a VoIP solution seems to be the dominant alternative, but our main analysis will be to determine which vendor is better suited to satisfy our business needs; a key consideration is that the transfer of CAD files is now as important as the point of sale in CP’s business model.

General assumptions:

  1. There are certainly many VoIP solutions in the marketplace we could cover, but we will limit the scope to the best two in our report. The decision to pick one over the other is really a subjective one for the CIO as they all offer rather comprehensive feature support.
  2. All of the solutions we consider can interconnect with a large number of different interfaces, terminals and gateways depending on the requirements of a specific deployment, thus allowing a large amount of flexibility in business applications.

3. REQUIREMENTS

Our aim is to procure a solution that can 1) offer cost-effective and seamless communication to all our users, regardless of their role within CP, 2) merge disparate technologies such as mobile platforms and web-aware business applications and 3) not simply enable efficiency through voice and data integration but leverage telephony implementations across our manufacturing and sales force. The following table shows CP’s different sales and manufacturing locations:

Sales Offices (15-20 people each):

  Europe
  • Dublin, Ireland
  • Frankfurt, Germany
  • London, UK
  • Madrid, Spain
  • Milan, Italy

  Asia
  • Beijing, China
  • Tokyo, Japan
  • Bombay, India
  • Islamabad, Pakistan
  • Moscow, Russia

  North America
  • Mexico City, Mexico
  • Ottawa, Canada
  • Washington, DC

  South America
  • Brasilia, Brazil
  • Bogotá, Colombia
  • Santiago, Chile

  Africa
  • Pretoria, South Africa

Manufacturing Sites (400 people each):

  • Haryana, India
  • Chandigarh, India
  • Dongguan, China
  • Guangdong, China
  • Tampa, Florida

Table 3: Citadel Plastics’ Locations

3.1 Technology Overview (current)

CP has a global presence with two types of offices around the world. The sales offices are located in major cities with access to high-speed Internet connections. The manufacturing facilities are located in remote parts of the world with limited access to high-bandwidth connections. Currently the sales offices share their CAD files using FTP servers. There is no formal process in place, and with the recent growth in business there have been a lot of file transfer delays.

Business Assumptions:

Based on the information, we have made the following assumptions:

Transfer Route

  1. The sales offices receive sales orders from customers via phone and the web.
  2. The engineering team creates the CAD files (100MB – 300MB) at the sales offices.
  3. Sales then sends CAD file to manufacturing site via an FTP server.
  4. Manufacturing site downloads the CAD file and builds the product.

WAN Connections

  1. The sales offices have T1 connections with 1.5 Mbps download and 1.5 Mbps upload speeds.
  2. The manufacturing sites have satellite connections with 1.5 Mbps download and 128 Kbps upload speeds (which often experience delays). A rough transfer-time estimate over these links is sketched below.
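To make the delay concrete, here is a back-of-the-envelope sketch of how long a single CAD file takes to move over the links described above. It assumes the file fully traverses each link and ignores FTP/TCP overhead, link contention and satellite latency, so real transfers would be slower:

```python
# Rough transfer-time estimate for a single CAD file (sketch only; ignores
# protocol overhead, contention from other traffic, and satellite latency).
def transfer_seconds(file_mb: float, link_mbps: float) -> float:
    """Seconds to move file_mb megabytes over a link of link_mbps megabits/sec."""
    return (file_mb * 8) / link_mbps

for size_mb in (100, 300):
    upload = transfer_seconds(size_mb, 1.5)    # sales office T1 upload (1.5 Mbps)
    download = transfer_seconds(size_mb, 1.5)  # manufacturing satellite download (1.5 Mbps)
    print(f"{size_mb} MB CAD file: ~{upload/60:.0f} min up + ~{download/60:.0f} min down")

# A 300 MB file takes roughly 27 minutes on each leg at best, which is why
# queuing several orders at once overwhelms the manufacturing sites' links.
```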

Mobile Computing Options:

Every CP user operates onsite with limited mobile computing options. There are two shared stand-alone laptops at each sales site. These laptops are used by the sales staff for rare client-site meetings. Manufacturing facilities do not have any laptops onsite.  Additionally, no mobile phones are provided to the users.

Interoperability/Integration:

Voice and data integration is a critical part of the network design. CP has many internal employees who work at different locations around the globe. These users need to be able to quickly and easily communicate, collaborate and share their data. External customers need to be able to submit orders and discuss any issues via phone or email. However, the current design does not utilize current integration and automation technologies. Initially this was not a problem, but with the recent growth in business, all members are experiencing issues. These issues range from transfer delays to voice quality problems when dealing with customers and vendors.

There are many application silos that have been created over time and have not been designed to share information easily. The three key types of information at CP are sales orders, email and CAD files. Sales orders are received via email or phone. There is a dedicated mail server at the Washington, DC location that handles email for the entire organization. The mail server is running Microsoft Exchange 5.0 on Windows Server 2003. Generally the mail flow is fine as long as Internet and power are available. However, the hardware is out of support and outdated. Additionally, there is a dedicated Internet connection at every location to the outside world.

For FTP transfers, users have access to a dedicated workstation with a dedicated layer 2 private line. This setup is installed at each location. Both the sales team and the engineering team complain about delays during FTP transfers. The delays are being caused by multiple factors. The lack of a queue causes the download links to receive numerous downloads at the same time. Most of the manufacturing facilities have lower connection speeds and cannot handle that load all at once. This causes frustration between the manufacturing and sales departments.

The voice network is entirely copper-based. Each site has a dedicated PBX with PRI lines that go out to the PSTN. The offices are using TDM phones with copper lines between the phone and the PBX. Though this is a traditional design, the phone company provides data, voice and power over the copper lines. This allows the phones to continue to run even when the local power company loses power. However, customers often complain about noise during phone calls and fast-busy signals and often resort to using their personal cell-phones.

Network Topologies:

The current network topology is shown below:


Figure 1: Current Network Diagram

This topology is used at both the sales offices and the manufacturing facilities. The diagram shows a dedicated uplink/downlink to the Internet. The speeds of this link vary between the sales offices and the manufacturing facilities. However the topology remains the same.

The router connects to a firewall, which is the only layer of protection from the outside world. Currently they are running a Juniper firewall with the default settings. There are no custom configurations on the firewall. The sites are prone to attacks that cause some of the Internet outages.

Behind the firewall sit a dedicated switch, a wireless access point, the mail server and an FTP terminal for file transfers. The wireless access point has been turned off to help lower the load on the bandwidth.

As mentioned earlier, the voice communication is currently configured over copper. All the TDM phones at each site connect to a router that is directly connected to a PBX. The PRI provider installs and manages the equipment and the call routing. This consumes a lot of power. During peak business hours, customers complain about static and voice degradation. The following figure shows the current voice communication setup:


Figure 2: Current Voice Communication Setup

Network Usage:

The T1 line at each sales office is overutilized. Users complain about transfer rates and slow Internet access during peak business hours. This causes delays when building orders for customers over the phone. In addition, each sales office sends 10 orders (CAD drawings) to the manufacturing facilities per day. These files are uploaded via a T1 connection and then downloaded at the manufacturing facility through a satellite connection.

Customers often complain about static and noise when calls are made from the office phones. This forces users to use their own personal cell phones to make phone calls. After further investigation, the leading cause of the noise is the limited number of lines on the PSTN.

The manufacturing sites receive calls from the sales offices but are only able to make two outside calls at any time. The users have managed to make this work, but calls are often missed and sales offices have to wait until the next day to get their orders in.

Security:

CP utilizes a Juniper firewall in its current environment. All workstations are equipped with stand-alone instances of Symantec Antivirus. There are no managed instances of AV clients on the entire network. Local machines are configured with Windows Firewall, but since all users have admin privileges, they often turn it off.

Security updates are pushed out manually and rarely ever verified. The vulnerability scanner reported 600+ missing security updates across the entire network. The doors to the network closets are often kept open to help with ventilation. This is a liability as it allows easy access to the organization’s critical IT services.

Implementation:

The current implementation was never documented. Current managers of CP suggest that a couple of hardware technicians who were not experts in network design did the implementation.

3.2 Technology Overview (future)

Regardless of which vendor we decide to procure our VoIP solution from, we need to acknowledge the variety of caveats inherent in a VoIP solution and define the scope as much as possible. What application? What platform? What protocols? We know VoIP is a broad term, describing many different types of applications installed on a wide variety of platforms using a wide variety of both proprietary and open protocols that depend heavily on the preexisting data network’s infrastructure and services. Therefore, we need to narrow the future technological overview of the VoIP solution we want to explore.

Because VoIP technology, as opposed to POTS, interacts with the Internet and can be configured in various types of network topologies, it is very susceptible to unwanted attacks. According to David Persky, the evolution of VoIP is riddled with vulnerabilities because “the security aspect was an afterthought and as such, there has been this seemingly endless game of cat and mouse between security engineers and vendors fixing vulnerabilities.” Therefore we have to make sure that the future solution CP engages in considers the following preventive measures: 1) promotion of greater log analysis to provide a clearer vision of voice and data traffic, 2) implementation of regular 3rd-party VoIP penetration testing tools such as Nessus, 3) segmentation of data and VoIP traffic into separate Virtual Local Area Networks (VLANs) to ensure that the VoIP VLANs cannot be used to gain access to other data VLANs, and vice versa, 4) creation of firewall rules to block all outbound traffic for known destination VoIP service ports, and 5) avoiding a single point of failure by not putting the IPS inline with the VoIP traffic.

Some of the main vulnerabilities we will reduce through these measures are denial of service (DoS) attacks, man-in-the-middle attacks, call flooding, eavesdropping, VoIP fuzzing, signaling and audio manipulation, SPIT or voice SPAM, and voice phishing attacks. When comparing these vulnerabilities with those of POTS, the two share most vulnerabilities except the ones involving the web interface. Unlike our old POTS system, where a line is vulnerable only when the telephone line is actually in use, VoIP can be exposed to the previous vulnerabilities even when the line or device is inactive. Since VoIP integrates voice and data on the computer, it is possible to hack into the VoIP system if the computer it is connected to is online. This is possible because most users “overlook the fact that the VoIP phone can possess a web management Graphical User Interface (GUI), and can be compromised to then attack other VoIP and data resources, without placing any calls.” Still, there are vulnerabilities in POTS that are also present in VoIP: Caller ID spoofing and toll fraud or phreaking.

Aside from sharing vulnerabilities, POTS and VoIP also share particular legislation that is applicable to both technologies. The two main pieces of legislation that the new solution we adopt must comply with are 1) the Communications Assistance for Law Enforcement Act (CALEA), which requires carriers and Internet Telephone Service Providers (ITSPs) to have a procedure and technology in place for intercepting calls, and 2) the Truth in Caller ID Act of 2007, which makes it unlawful for any person in the US to cause any caller identification service to transmit misleading or inaccurate information with the intent to defraud or cause harm. Based on these overall technological considerations, we can proceed to analyze our recommendations.

4. RECOMMENDATION # 1: Cloud VoIP Solution

The first solution we are recommending for consideration is a hosted PBX or “Cloud” based phone solution. There are a number of vendors that offer hosted PBX solutions that would enable a cost effective and simple VoIP solution, while also providing cutting edge technical features and functionality.

A hosted cloud provider would primarily offer CP the following benefits:

  1. No hardware: Beyond core network routers and switches, no PBX or other VoIP equipment would be necessary for the solution. This would reduce the capital expenditure requirements and implementation costs that buying an “in-house” VoIP solution would entail.
  2. Ease of deployment: The initial and subsequent deployment of physical phones is effortless with a cloud solution. CP can simply plug a phone into the network and the phone uses DHCP to automatically configure itself for the network.
  3. Web based administration: A cloud-based solution is controlled by a web administration portal that allows for web based provisioning and administration from any Internet accessible computer.
  4. Full features and functionality: Most cloud solutions have cutting edge features and functions such as voicemail to email, automatic presence (availability) detection, etc. Additionally, as the provider improves its offering or adds features, CP would be able to leverage these.
  5. End Point Options: Most cloud providers offer “soft” phones in addition to physical phones that can be installed on a computer or smart phone device giving a user many different options for making and receiving calls.
  6. CRM Integration: Some cloud providers would give CP the ability to seamlessly log calls into certain CRM solutions (like Salesforce.com) to provide for enhanced process efficiencies, tracking and reporting.

A cloud-based VoIP solution, however, does pose some risks and challenges for CP. Primarily, these risks relate to call quality and outages. Since all calls have to route through the cloud provider, without a dedicated Multiprotocol Label Switching (MPLS) connection to the selected provider, calls would route over the public Internet and there are no QoS guarantees outside of CP-controlled networks. Additionally, any outage impacting the cloud provider would inherently impact CP, so proper and thorough due diligence is needed during vendor selection.

4.1 Cloud VoIP Solution Project Implementation Plan

A cloud-based VoIP solution greatly reduces the complexity of a VoIP implementation for CP, which is a primary compelling driver of this alternative. First, CP would want to estimate the number of calls and concurrent calls it expects to route through the VoIP system. This data would drive plan selection (international calling plans) and ensure the proper connectivity to each location for supporting such a solution.

Second, CP would complete a technical assessment of its internal network architecture. For example, CP would need to ensure that all core switches and routers allow for QoS, that the necessary firewall ports are open to allow the UDP traffic of the phone vendor, and that the Internet connectivity at each site can support the VoIP traffic. Most cloud vendors suggest an average of at least 64 Kbps per call (up/down), which can then be multiplied by the number of expected concurrent calls to create a baseline minimum connectivity standard.
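As a quick sizing sketch based on that 64 Kbps-per-call guideline (the per-site concurrent-call counts and the 25% headroom factor below are illustrative assumptions, not vendor requirements):

```python
# Baseline connectivity estimate for a cloud VoIP rollout, using the common
# vendor guideline of ~64 Kbps per concurrent call in each direction.
KBPS_PER_CALL = 64

def min_bandwidth_kbps(concurrent_calls: int, headroom: float = 1.25) -> float:
    """Minimum up/down bandwidth in Kbps, with 25% headroom for bursts (assumption)."""
    return concurrent_calls * KBPS_PER_CALL * headroom

# Illustrative concurrent-call assumptions per site type
sites = {"sales office (15-20 users)": 10, "manufacturing site": 10}
for site, calls in sites.items():
    print(f"{site}: at least {min_bandwidth_kbps(calls):.0f} Kbps up and down")
```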

After the planning stage of the implementation is complete, CP could leverage the cloud provider’s web-based control panel to set up each extension, voicemail box, user, etc. for each phone that it will deploy (there is no need to configure the physical phone itself).

When a phone arrives onsite, the user can simply plug it into the network and it will automatically configure itself with a DHCP-issued address and contact the cloud provider’s website to download its assigned profile. This will reduce the need for IT staff to physically support the VoIP rollout at each location, saving CP additional implementation funds.

4.2 Cloud VoIP Solution Disaster Recovery

In the event of a Disaster, a cloud solution provides CP with a number of options.

Since the cloud VoIP solution is offsite, it is inherently removed from any disasters that impact the continuity of CP directly, as access to the phone system requires only an acceptable Internet connection. Should an event occur that impacts CP operations in any way, calls to CP would still complete since they route through the cloud provider’s network. Since most cloud providers allow for rollover functionality to mobile phones, calls could still route to the intended recipient or, at worst, go to voicemail.

Additionally, most cloud providers offer “soft phones” that enable calls to be made and received – using the same number/extension – from software installed on a computer or smartphone device. So in the event of a disaster, we would develop a number of procedures that accommodate ongoing use of the cloud phone system in a variety of different ways, assuming a user has an acceptable Internet connection.

While a cloud solution would inherently offset most of the technical disaster recovery needs, it would expose CP to the disaster recovery posture of the provider. Therefore, when we recommend a specific vendor, we will ensure proper due diligence is undertaken on the cloud vendor’s strategy, processes and procedures.

4.3 Cloud VoIP Solution Failover Remediation

From a technical perspective, a cloud solution means that CP simply has to consider redundancy in its local area networks and Internet connections, as we would offset the technical failover mechanisms to the chosen cloud provider. In this sense, we can use the redundancy we’ve already built into the existing LAN and WAN to hedge against issues getting to the cloud phone provider.

In the event a failure occurs to the network or Internet at a CP site, CP’s phone system would technically not go down because all calls route through the cloud provider. As mentioned in the Disaster Recovery section, calls could automatically reroute to mobile devices or “soft” phones to reach the intended recipient or extension.

However, since CP effectively would be outsourcing its VoIP solution, CP would also outsource failover remediation to the selected cloud provider and would be exposed to any outage that the cloud provider may have. Industry-leading cloud providers have internal failover remediation solutions and processes, which would be more redundant and resilient than what CP could likely afford; however, outages do occur, and we would be beholden to the cloud provider for resolution if a system outage were to occur.

4.4 Cloud VoIP Solution Vendors, Price, SLAs and Value

There are a growing number of cloud based / hosted PBX options available for CP to choose from – most of which cater to the mid-market. Industry leaders include RingCentral.com, Comcast, Verizon, XO communications, Vonage (business solutions), 8×8 and Grasshopper.com.

Most cloud solutions offer their service at a per-user, per-month rate, and prices can range from about $19.99 to $49.99 per month, which typically includes unlimited minutes and some allocation of international long distance minutes.
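For a rough sense of what those rates mean in annual operating cost, here is a sketch. The $34.99 tier, the per-site user counts and the number of manufacturing phone users are illustrative assumptions drawn loosely from the earlier assumptions in this report, not a vendor quote:

```python
# Illustrative annual subscription cost for a hosted VoIP plan (assumptions only).
price_per_user_per_month = 34.99      # assumed mid-tier rate within the $19.99-$49.99 range
sales_offices = 17                    # sales office locations listed in Table 3
users_per_office = 20                 # upper end of the 15-20 user assumption
phone_users_at_manufacturing = 50     # assumption: only a handful of phone users across the sites

total_users = sales_offices * users_per_office + phone_users_at_manufacturing
annual_cost = total_users * price_per_user_per_month * 12
print(f"{total_users} users -> ~${annual_cost:,.0f} per year")
```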

Unfortunately, Service Level Agreements (SLAs) are typically not offered for any solution that requires communication to the provider over the public Internet, as QoS cannot be ensured. However, if CP can enable an MPLS network to a selected cloud provider, that provider will typically negotiate SLAs to provide some guarantees.

5. RECOMMENDATION # 2: On-Premise VoIP Solution

Verizon was chosen as the network provider due to its substantial global footprint for both MPLS and SIP Trunking connectivity. Verizon also receives high marks for its Voice over IP Service portfolio from industry critics such as Gartner. Verizon provides full VoIP services (i.e., local, long distance, and international) to North America and most of Europe.  Countries that are not covered by full VoIP services will utilize a hybrid approach that employs 3rd party voice services to fill in the gaps in services.  In all cases, MPLS connectivity will allow each country to realize cost savings by directing intra-company calls across the MPLS network.

In the site listing below, the sites in red have full VoIP services from Verizon. For the blue sites, Verizon is able to provide international and intra-company VoIP services; the customer will need to order local services via ISDN PRI or some other PSTN connectivity through a third-party provider. The purple sites have MPLS connectivity only; the customer will need to order local, long distance, and international service via a third-party provider. The customer’s dial plan will be configured such that intra-company calls will be sent over the MPLS connection directly to the called site, allowing them to still realize cost savings by bypassing the tolls for those international calls.

5.1 On-Premise VoIP Solution Project Implementation Plan (for hub sites)

CP will employ a Cisco VoIP solution for call processing, utilizing the Cisco Unified Communications Manager (CUCM) to support a multi-site, distributed call processing deployment with a group of call processing servers operating in a cluster to form a single logical call processing server. Two hub sites will provide call signaling and application services to the network. A hub site in Washington, DC will directly support the locations within North and South America, while another hub site in London, UK will support the locations within Europe, Asia, the Middle East, and Africa. Both the Washington, DC and London, UK hubs will have a CUCM Publisher server for CUCM configuration and two additional Subscriber servers for primary and backup call signaling and application services. Cisco Unified Border Element (CUBE) routers will provide the Session Border Control (SBC) functionality between CP and the Verizon SIP Trunking network provided over MPLS dedicated circuits. The following figure shows the different offices that would use this solution:


Figure 3: On-Premise VoIP Solution Locations

5.2 On-Premise VoIP Solution Disaster Recovery (for remote sites)

The remote sales and manufacturing offices will also have CUBE routers to terminate their Verizon SIP Trunking connections. The routers will utilize Cisco’s Survivable Remote Site Telephony (SRST) feature, which automatically detects the loss of call processing connectivity to the hub site’s CUCM and auto-configures the router to provide local call processing to the IP phones while network connectivity is restored either locally or to the hub site. Each remote site will also be configured with two onboard Foreign Exchange Office (FXO) interfaces for Plain Old Telephone Service (POTS) lines to allow for emergency outbound dialing such as 911. The router will automatically redirect outbound calls to the FXO interfaces until connectivity is restored to the hub CUCM servers, at which time any new calls will again be sent over the WAN link.

For countries that have limited or partial SIP Trunking service with Verizon, a hybrid approach is required whereby the customer procures PSTN service via a third-party local provider and routes intra-company and/or international voice calls across Verizon’s MPLS SIP Trunking network.

5.3 On-Premise VoIP Solution Failover Remediation (network sizing)

To conserve bandwidth, CP will utilize the compressed G.729a codec, which requires 33 kbps per call, compared to the G.711 codec, which requires 83 kbps per call. Verizon’s SLA includes a MOS score of 4.0 for G.729a traffic, which supports high-quality voice.

The sales offices and major locations all have 15-20 people. The network was sized to support concurrent calls for half the users at any given site. 10 calls multiplied by 33 kbps equals 330 kbps of required bandwidth per site, resulting in a fractional T1 or E1 circuit with room to grow. Although the manufacturing sites have large numbers of employees, very few of them will have their own phone or actually spend much time on the phone, so the concurrent call requirements will be very similar to the sales offices and major locations. For resiliency, the hub sites will have two diversely routed T1/E1 circuits. This will allow the hubs to have alternate network paths for their own SIP Trunking connectivity and phone service as well as provide backup paths for the remote sites that depend on the hubs for their signaling and call control.
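The same sizing arithmetic can be sketched as a codec comparison. The per-call rates are the ones quoted above, and the 10-call figure mirrors the concurrent-call assumption in this section; this is an illustration, not a network design:

```python
# Per-site WAN sizing for the on-premise design: G.729a vs. G.711 (sketch).
CODECS_KBPS = {"G.729a": 33, "G.711": 83}   # per-call bandwidth from the report
CONCURRENT_CALLS = 10                        # half the users of a 15-20 person sales office

for codec, kbps in CODECS_KBPS.items():
    required = CONCURRENT_CALLS * kbps
    print(f"{codec}: {required} kbps per site")

# G.729a needs ~330 kbps per site (a fractional T1/E1 with room to grow),
# while G.711 would need ~830 kbps for the same call volume.
```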

5.4 On-Premise VoIP Solution Network Changes and Design

For the initial VoIP rollout, CP will be converging voice and data on the LAN. The new MPLS connections will be dedicated solely to voice traffic. A separate WAN network is already in place for data traffic. To support voice and data convergence on the LAN, QoS configurations will be implemented to prioritize time-sensitive voice traffic over data traffic. QoS will also be configured on the WAN network to prioritize voice traffic across the MPLS backbone. CP would use its LAN convergence experience as a stepping-stone to eventual full convergence over both the LAN and WAN.


Figure 4: On-Premise VoIP Solution Network Design

6. CONCLUSION

To summarize, the two VoIP solutions consolidate the voice and data networks of CP in order to provide more bandwidth for the exchange of the CAD models. The benefit of looking at two solutions is to see the choices that are available to us. This analysis also indicates that with whichever solution we proceed, we have to take into consideration risks that revolve around people, processes and technologies. These risks include change management, circumvention of the new processes, obsolescence of the technologies and the vendors going out of business. Taking into account all these risks and the long-term benefits for the organization, we recommend the on-premise VoIP solution for Citadel Plastics.


AlohaNet

As an Enterprise Architect, I help organizations transform through people, processes, and technologies, and I have had my fair share of dealing with technology infrastructure issues. However, in dealing with technology infrastructure, I have not paid that much attention to the underlying networks, since I have always assumed that they would be there and always available. But after reading this article about AlohaNet, I have come to realize that what we take for granted today is the result of many years of problem-solving activities that involved universities, the military and commercial organizations. Thus, I now have a greater appreciation for the importance of networks for individuals and organizations.

Typically in conversations with others, I have often indicated that the Internet came from ARPANET, which was a military-funded project. While this is correct, it diminishes the role the University of Hawaii played in laying the foundations of the Internet before it was even funded by the military. Prior to this article, I was not aware of the University of Hawaii’s contributions. What is interesting is that the Internet started with humble beginnings in the 1960s, when some people at the university were just trying to figure out how to share resources across the various university buildings that were spread across the Hawaiian Islands. To think that the foundations of the Internet came from islands that were created by volcanic activity in the middle of the ocean millions of years ago is truly awe-inspiring.

The author does a great job of beginning with a story and then getting into the technical details of network communications. There are a couple of interesting points that the author talks about which I will relay below:

Firstly, the original goal of the ALOHA system was not to create the robust network of networks (i.e., the Internet) that every individual and organization can use; it was simply to see whether radio communications could be used in place of conventional wire communications when needed. Interestingly, this was uncharted territory even for the experts, who at the beginning did not realize the importance of radio broadcast channels with multiple access capabilities versus conventional point-to-point wire channels. In hindsight, going with radio broadcast channels was the right choice, because point-to-point wire channels would have cost too much from an infrastructure standpoint and would not have scaled as rapidly due to the time it would take to establish the various point-to-point channels. In my experience, technologies that do scale quickly have three main ingredients: (1) appropriate funding, (2) a collaborative environment and (3) technical sophistication that is hidden from the end-users. This is how I see the evolution of networks, from resource sharing to today’s use of the Internet.

Secondly, the author refers to the “usual software delays” even when developing network protocols. To me, this seems to indicate that software delays are nothing new; although we pay a lot of attention to them today, they have been the ‘norm’ for a while. Through a broader lens, this comment also illustrates the reliance of networking on the underlying software used to handle data packets. From this, we can see that the relationship between a network and its networking software is a very close one.

Thirdly, the international efforts that involved research facilities and universities to show the potential of data networks are noteworthy. They show the combined resolve of humans to test and solve problems collaboratively. I am not sure this still happens today, where instead of being protectionist about technologies, they are used by and for everyone. From a broader perspective, this also means that the military, research facilities, and universities were looking at the exchange of data through broadcast data packets beyond national boundaries.

Fourthly, the advent of the microprocessor and its incorporation into terminal control was an important achievement that opened the doors for commercial usage. One thing led to another: first a paper, then a book, then looking at various media for packet broadcasts, and then the tipping point where Motorola introduced its unslotted ALOHA channel in the personal computer. Interestingly, all of these events happened within a decade and thus opened up new possibilities not only for the people involved but for everyone else.

Lastly, the alignment of strategy and theoretical realities is, I believe, the key to all of what was going on. It seems like the process of learning went both ways, where strategy learned from execution and execution fed back into strategy. In today’s world, this alignment is difficult to come by for many reasons. From a problem-solving perspective, this misalignment can result in delays, overruns, and frustrations. I am sure the data packet broadcast journey had its own issues as well, but that did not deter people from keeping their eyes on the big picture. Where would we be today if the misalignment had continued and there had been no resolution? I would argue that the Internet would still have been developed and networks would have been incrementally improved, but the Internet revolution would at least have been delayed.

In conclusion, this article showcases the human resolve to push through uncharted technical territory, figuring things out along the way, and the determination to accomplish the desired objectives. It also illustrates the happenstance of putting the ALOHA system on the list for Interface Message Processors (IMP). There were numerous moving parts, but in the end, sending data packets through the broadcast channel was a success that paved the way for future innovations.
