Custom development in the software industry has come a long way. Today, more and more companies are looking for off-the-shelf solutions. It goes without saying that such a solution may not meet all the needs of a company, but it is quick to implement, cost effective, and scalable. Even today we come across many cases where a custom solution remains inevitable for a client, often because the application niche is so specific that no ready-made solution exists.
This emerging situation brings a new set of challenges for development, project management, and related functions. Take the case of winning a project. Earlier, we devoted our time to the accuracy of estimates, and various methods evolved, such as Function Point Analysis and Work Breakdown Structure. Software vendors matured their practices and came up with robust estimation and sizing models. Accurate sizing is necessary, but it does not guarantee success, and such failures create a tremendous disconnect among the organization's functional entities: sales, solutions, operations, and so on.
The real question is: sizing what? Sizing is appropriate only if the solution approach is comparable and cost effective. I have seen companies lose business to competitors who are agile and innovative and who utilize their software asset base to provide cost-effective solutions to the client. Therefore, as management, as project managers, and as architects, we have to think differently. Here is some of the experience I would like to share:
1. The brick-and-mortar age is gone. Building everything from scratch will not work in an era of off-the-shelf products. Avoid brick-and-mortar solutions as far as possible.
2. Reusability and replication are mantras for success. We have been talking about this for ages, but putting it into practice requires strategic thinking and direction, which come from focus and scale. Companies that touch everything cannot compete with companies that specialize in a domain and therefore have more opportunities in the same area. The specialists leverage their asset base better and come up with a more cost-effective solution despite a rate disadvantage. Using these principles, even a local vendor in a Western country can compete with an offshore vendor from a low-cost location.
3. Building tools and templates: tools and templates help accelerate development, and the effort of building them can be amortized over a large number of projects.
4. Technology developers have come up with language-based frameworks, design patterns, and the like. Using them efficiently is paramount to staying competitive.
5. It is time to look at innovative approaches to solutioning. The old industrial engineering toolkit is very handy here: manufacturing principles such as Kaizen and Lean can be applied in software situations as well.
It is easier to talk about these principles than to implement them. Doing so requires a strategic direction for the company to start with. Once you have that direction, you can start building assets and putting the thoughts above into practice. This requires replicable work opportunities and economies of scale, and the replicable opportunities have to be in your chosen domain and technology areas. One cannot compete as a generalist. Stay focused on a technology and a specific domain, create scale, apply the five principles mentioned above, and you will succeed.
Darwin's principle of evolution holds perfectly well in this environment: the struggle for existence and survival of the fittest. This is a time for non-linear thinking. With a small increase in the resource base, we need to deliver non-linear output. "Do more with less" is the final goal. This is a paradigm shift, and if you do it correctly, you can be more profitable while providing a very cost-effective solution to your customer.
Thursday, June 24, 2010
Thursday, June 17, 2010
Using the Theory of Constraints to manage a large software project
"The Goal", a book written by Eliyahu M. Goldratt and Jeff Cox, is closing in on 8 million copies sold. This is incredible; I am not sure how many business books have reached such a level, and the sales clearly indicate something inherently great about this book. It is written in a story style, and the concept of the Theory of Constraints (TOC) is explained in very simple language through simple situations. I cannot think of a better way to explain TOC concepts than the way the authors do. The examples are taken from the manufacturing industry.
I am reproducing a short extract from the Preface of the book.
“Companies, even the relatively small ones, are incredibly complex. When facing complex systems, it is so easy to fall into the trap of looking for complicated solutions. Still, we all know that complicated solutions do not work! Instead, what we should do is realize that every complex system is based on inherent simplicity. The way to rapidly improve our systems is to search out that inherent simplicity and capitalize on it.”
I am sold on the principle of simplicity. When I finished reading this book, I was struggling with a project of 50 people and an estimated effort of 20 person-years. The project was delayed, and the completion date looked like a moving goalpost. Although TOC principles are primarily applied to manufacturing-type systems, I found them very relevant to the complex situation we were in.
In the software business we talk about life cycles like Waterfall, Agile, and Iterative. The principles of TOC can work in conjunction with any of them. Let us look at a brief case study.
On 1st October 2009, we saw this project getting out of hand. We brainstormed and made major changes to the way we managed it. We looked at each element of the project team: the team structure could be depicted as a sequence of blocks of related activities, with each block representing a processing station.
We noticed a lot of backlog, late working, and low motivation at the Integration station. The UAT team, which was a client team, had a similar problem. The natural response was to define goals for the project and structure the team around them. The goals were agreed as:
1. Improving throughput through the system.
2. Ensuring high quality of deliverables, with a target of reducing UAT defects by at least 67%.
Besides the above goals, a working discipline was agreed to support them: every station would work towards ensuring the utilization of the critical stations (Integration and Acceptance in this case), and work allocation would be done to assist faster throughput rather than to maximize the utilization of members at the other stations. The work was structured in batches, and batch movement across the stations was monitored and managed on these principles.
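The core of this batch discipline can be illustrated with a minimal sketch. The station names and capacities below are hypothetical, not figures from the project; the point is that in a sequence of processing stations, delivered throughput is capped by the slowest (constraint) station, so releasing work faster than the constraint only builds backlog:

```python
# Hypothetical sequential stations and their capacities (batches per week).
stations = {
    "Design": 8,
    "Build": 6,
    "Integration": 3,   # the constraint in this illustration
    "Acceptance": 4,
}

def system_throughput(stations):
    """Throughput of a sequential line equals the capacity of its constraint."""
    bottleneck = min(stations, key=stations.get)
    return bottleneck, stations[bottleneck]

bottleneck, rate = system_throughput(stations)
print(f"Constraint: {bottleneck}, throughput: {rate} batches/week")
# Releasing more than this rate of batches per week only builds backlog
# upstream of the constraint; it does not increase delivered output.
```

This is why work was allocated to keep the critical stations fed rather than to keep every station busy.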
Implementing this discipline faced resistance from both internal team members and the client. A series of discussions was held with stakeholders, and finally the customer's concurrence was obtained to try it out. We were confident we could demonstrate the benefits within a week or so, and this was used to persuade the client to watch the process for a fortnight.
Once the basic rules of the game were agreed, the whole team worked to support the goals. We ran a daily stand-up call concentrating on three questions: what got done in the last 24 hours, what is planned for the next 24 hours, and what are the bottlenecks. We then worked to resolve the bottlenecks.
The first batch under the new process was submitted on schedule. The UAT results were very encouraging and lifted the morale of the team; people had a renewed level of confidence and were upbeat about the results. This changed the whole game. The theory was followed in word and spirit, and in three months' time the project was completed and accepted by the customer. Although we had a bad start, the excellent finish earned us great respect from the client and a great sense of achievement for all of us.
The software industry is gradually maturing, and the time is ripe for applying proven manufacturing (factory management) principles to it. TOC is one of them, and I am upbeat about my experience of solving large, complex project issues with this theory. It has worked well and can work with any life cycle selected for project execution.
Thursday, June 10, 2010
A heuristic approach to Application Portfolio Management
1. INTRODUCTION
Globalization and the dilution of trade boundaries have forced industries to evaluate various options for cost-effective operations. Industries now face the challenge of channeling resources to meet organizational objectives in continuously changing environments. Information Technology plays a major role in such a dynamic business environment. However, with the advent of new technological tools and continuous advancement, the IT environment has become very complex.
Chief Information Officers (CIOs) regularly add new assets to their IT portfolio, making the portfolio more and more complex. In addition, they face the task of aligning business needs and IT assets. Cost considerations have become increasingly important: the challenge is to do more for less.
In such a dynamic situation, managing the IT portfolio has become an increasingly important and crucial task. An increasing number of CIOs are pursuing IT portfolio management as a tool to prioritize investment decisions, decide the location of various activities, evaluate each asset against the value it delivers, and more.
Each asset in the portfolio is evaluated against parameters like the cost of procurement, the cost of managing it, and indeed the cost of replacing it. The "total cost of ownership" (TCO) is seen in conjunction with the value delivered by the asset, and optimal management of assets leads to minimal TCO.
A study by Gartner states that, "Approximately 30 percent of the total cost of ownership during the life of an application is for its maintenance and management." One can therefore achieve significant cost reduction by managing the costs associated with maintenance and management.
2. WHAT IS APPLICATION PORTFOLIO MANAGEMENT (APM)?
Portfolio management is the process of managing assets and investments in order to achieve desired organizational goals. The portfolio is a combination of assets that are expected to provide certain returns and that carry associated risks. Portfolio management includes selecting a set of assets congruent with the set goal, managing the economic lifecycle of those assets, and dynamically divesting and investing in different assets to optimize gains. In the IT context, the portfolio includes application software, hardware, infrastructure, resources, processes, and so on. While financial portfolio management has been in practice for many years, IT portfolio management is relatively new and gaining ground. An integrated framework that helps in investment, divestment, modification, and movement of application assets has become a necessity to align the business and IT goals of an enterprise.
Application Portfolio Management can be best described as:
• A "living program" that allows you to assess the applications in your portfolio, evaluate potential changes, and understand the risks and impact of these changes to the portfolio.
• A discipline and a set of tools that enable a CIO to respond to the pressures of managing an application portfolio.
• A framework that helps relate the total cost of ownership to revenue, identify redundancies and gaps in current capabilities, pinpoint trouble spots, and highlight opportunities to pursue sourcing alternatives.
3. WHY IS IT NEEDED?
An APM framework…
• Continuously monitors environmental changes in the business and keeps the portfolio optimal.
• Aligns business and IT objectives.
• Reduces portfolio complexity and creates a portfolio roadmap.
• Reduces the total cost of ownership.
4. AN APPROACH TO APM
The introduction of APM in an organization has to be phased, balancing costs, benefits, risks, and business objectives. APM is a continuous process, much like financial portfolio management, where the portfolio manager continuously watches environmental changes and fine-tunes the portfolio for optimal gain.
Application Portfolio Management requires a life cycle to be effective. The phases of the life cycle are described below:
4.1 Define Goals & Strategies
IT initiatives are meant to facilitate the achievement of business goals. Business goals are defined for the overall organization, and a top-down approach is recommended to create IT goals congruent with them. One option is the Balanced Scorecard (BSC) developed by Kaplan and Norton. Once the IT goals are frozen, they are broken down into functional goals, which leads to a broad-level functional and technological implementation strategy.
Some key tasks are listed below:
• Identify relevant business goals
• Identify IT goals and initiatives
• Map IT initiatives
• Cluster the applications on the basis of:
o Functional grouping
o Technology grouping
• Evaluate alternative strategies for the clusters
• Finalize the strategy for each cluster
• Set goals for the APM initiative
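The clustering step above can be sketched in code. In this minimal illustration, the application records, group names, and the `cluster` helper are all hypothetical examples, not part of the framework itself:

```python
from collections import defaultdict

# Invented application inventory entries for illustration only.
apps = [
    {"name": "Payroll", "function": "HR",      "technology": "Java"},
    {"name": "Billing", "function": "Finance", "technology": "COBOL"},
    {"name": "Ledger",  "function": "Finance", "technology": "COBOL"},
]

def cluster(apps, key):
    """Group application names by a chosen attribute (functional or technology)."""
    groups = defaultdict(list)
    for app in apps:
        groups[app[key]].append(app["name"])
    return dict(groups)

print(cluster(apps, "function"))
# → {'HR': ['Payroll'], 'Finance': ['Billing', 'Ledger']}
```

The same helper applied with `key="technology"` yields the technology clusters, so one inventory supports both groupings.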
4.2 Resource the initiative
Management sponsorship is a must to take the initiative forward. It is suggested that one identify an owner for the initiative, get budget approval, set improvement goals, and create a plan to complete the exercise. Some of the tasks during this phase are:
• Obtain management sponsorship
• Identify a lead to carry the initiative
• Get budget allocation
• Get plan approval
4.3 Conduct an assessment
An analytical framework must be put in place to assess each application from the maintain, improve, or retire perspective. The assessment combines stakeholder interviews with relevant data collection. The collected data is sanitized and analyzed using the framework, and an index representing ease of movement and value creation is established for each application. The set of applications is then classified into bands based on ease of movement, value creation, functional grouping, and technological grouping, and a final sequence is arrived at by superimposing the customer's comfort level on the analysis. The task sequence is:
• Portfolio analysis
• Model Building / Customization
• Analysis and computation
o Application Index
o Criticality, Volatility, Complexity indices
o Value Creation Index
• Evaluate applications and decide the strategy
4.3.1 Developing application indices
A detailed analysis in terms of criticality, complexity, value, cost, and so on is necessary to gain insight into the state of an application and decide its future. Since the analysis aims to examine the application portfolio and improve its cost-value performance, an index is established to throw light on ease of movement.
The application index is a measure of the moveability of an application: the higher the index, the more difficult the application is to move. Developing the index helps classify applications into bands and gives a relative assessment of their moveability.
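As a minimal sketch of such a composite index, the factor names, the 1-5 scoring scale, and the weights below are hypothetical assumptions, not the model prescribed by this paper:

```python
def application_index(scores, weights):
    """Weighted average of factor scores (1-5 each), mapped onto the 0-1 range."""
    assert set(scores) == set(weights)          # every factor must be weighted
    total_weight = sum(weights.values())
    raw = sum(scores[f] * weights[f] for f in scores) / total_weight
    return (raw - 1) / 4                        # map the 1-5 scale onto 0-1

# Invented factor weights and scores for one application, for illustration only.
weights = {"criticality": 0.4, "complexity": 0.35, "volatility": 0.25}
scores  = {"criticality": 4,   "complexity": 5,    "volatility": 2}
print(round(application_index(scores, weights), 2))
```

A higher result would indicate an application that is harder to move; the real framework would derive the factors and weights from the assessment data rather than fix them up front.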
4.3.2 Portfolio Analysis
Application portfolio analysis is the first step towards application outsourcing planning. It details:
• Application
• Technical Environment
• Functional Group
• Documentation Group
• Associated costs
The following table provides a high-level overview of the Portfolio Analysis phase:
Portfolio Analysis
Entry Criteria
1. Go ahead
Exit Criteria
1. Identification of application portfolio.
2. Determination of technical and functional grouping.
Methods and Tools
1. Data Collection—Forms/Questionnaire
2. Interviews
3. Data Summarization
Key Tasks
1. Establish a high-level plan.
2. Discuss the detailed approach.
3. Conduct interviews with application managers, key users, and IT directors.
4. Collect data from available records.
5. Identify functional group.
6. Identify technical grouping.
4.3.3 Model Building and customization
This phase customizes the framework to suit the specific requirements of the customer. The base model, depicted in the accompanying diagram, is customized during this phase.
The following table provides a brief overview of Model Building and customization:
Model Building and customization
Entry Criteria
1. Attribute-wise application inventory.
2. Technological and functional grouping.
Exit Criteria
1. Definition of application attributes and their factors.
2. Model for application index.
Methods and Tools
1. Sum of position digit method.
Key Tasks
1. Identify significant factors affecting application outsourcing.
2. Define each factor as an index.
3. Assign weight to each application factor.
4.3.4 Analysis and computation
During this phase, significant factors are grouped together to develop the various application indices. The data is examined for range behavior, the indices are computed, and the indices are normalized to bring them into parity with one another.
Analysis and computation
Entry Criteria
1. Application indices definition
Exit Criteria
1. Application index
2. Functional group
3. Technological group
Methods and Tools
1. Statistical analysis (distribution, range analysis)
2. Delphi techniques
Key Tasks
1. Establish the indices.
2. Normalize the application indices.
3. Compute composite application index.
4. Map Technological and functional grouping to each application.
5. Compute Documentation index.
Factors contributing to each index are identified along with their value ranges. Data cleansing and analysis are done to arrive at normalized indices. After the individual indices are developed, the Delphi method is used to arrive at the Application Index and the Value Creation Index.
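The normalization step can be sketched as a simple min-max scaling, one common way (an assumption here, not prescribed by the framework) to bring indices with different ranges onto a common 0-1 scale:

```python
def normalize(values):
    """Min-max normalization: rescale a list of index values onto 0-1."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]   # degenerate case: all values identical
    return [(v - lo) / (hi - lo) for v in values]

# Invented raw complexity scores for four applications.
complexity = [12, 30, 18, 45]
print([round(v, 2) for v in normalize(complexity)])
# → [0.0, 0.55, 0.18, 1.0]
```

Once every index sits on the same scale, composite indices and cross-index comparisons become meaningful.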
4.3.5 Studying Indices
Analyzing the indices is what gives the framework its real power: it lets you view the application portfolio from different angles and draw conclusions. The following sections provide an overview of the different perspectives:
(i) Cost vs. Criticality
The criticality index of each application measures the application's criticality from the business standpoint. Plotting cost against criticality in a two-by-two matrix yields four possible situations.
Quadrant I: Low criticality and low cost. These applications require further analysis before any recommendations are made.
Quadrant II: High criticality and low cost. These applications can be evaluated for retention.
Quadrant III: Low criticality and high cost. These applications are potential candidates for replacement.
Quadrant IV: High criticality and high cost. These applications should be examined for cost-performance improvement.
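The quadrant classification above can be sketched as follows, assuming the cost and criticality indices have already been normalized to 0-1 and using 0.5 as a purely hypothetical high/low cut-off:

```python
def quadrant(cost, criticality, cutoff=0.5):
    """Classify an application into one of the four cost-criticality quadrants."""
    if criticality < cutoff and cost < cutoff:
        return "I: analyze further"
    if criticality >= cutoff and cost < cutoff:
        return "II: retain"
    if criticality < cutoff and cost >= cutoff:
        return "III: candidate for replacement"
    return "IV: improve cost performance"

print(quadrant(cost=0.8, criticality=0.2))
# prints "III: candidate for replacement"
```

In practice the cut-off would come from the distribution of the normalized indices rather than a fixed 0.5.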
(ii) Volatility vs. Criticality
Critical applications are the lifeline of an organization, and the CIO's role is to provide stable applications. A two-by-two volatility-criticality matrix helps identify the applications or application groups whose stability must be improved. This study leads to further analysis in terms of design improvement, platform rationalization, re-engineering, and so on.
Quadrant I: Low on criticality and low on volatility. These applications require more analysis before action.
Quadrant II: High on criticality and low on volatility. If there are no other compelling factors, these applications can continue on an as-is basis. For example, if some applications in this group offer potential savings through replacement or off-shoring, they can be moved; otherwise, they can continue in their present state.
Quadrant III: Low on criticality and high on volatility. No application fell into this quadrant in our study; normally, such applications can be evaluated for retirement.
Quadrant IV: High on criticality and high on volatility. These applications need improvement aimed at reducing volatility, either by replacing them or by fixing the cause of the volatility.
(iii) Analyzing Functional Group Complexity
The various functional group systems must be aligned with business objectives. The Complexity Index helps identify the most complex functional groups using Pareto analysis: the Complexity Index of each application is rolled up to derive a Group Complexity Index, and the Pareto analysis of these group indices provides insight for group-wise BPR and re-engineering to reduce system complexity.
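The Pareto roll-up can be sketched as follows. The group names, complexity values, and the 80% threshold are illustrative assumptions:

```python
def pareto(group_values, threshold=0.8):
    """Return the highest-valued groups that together cover the threshold share."""
    total = sum(group_values.values())
    ranked = sorted(group_values.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0
    for group, value in ranked:
        selected.append(group)
        cumulative += value
        if cumulative >= threshold * total:
            break
    return selected

# Invented Group Complexity Index values for five functional groups.
groups = {"Finance": 50, "HR": 5, "Logistics": 30, "Reporting": 10, "Admin": 5}
print(pareto(groups))
# → ['Finance', 'Logistics']  (these two cover 80% of total complexity)
```

The resulting short list is where group-wise BPR or re-engineering effort would yield the largest complexity reduction.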
(iv) Gas Guzzler
Organizations have limited dollars to support their application portfolios. Interestingly, the applications in a portfolio do not consume equal amounts of money; they tend to follow the 80-20 rule. This analysis sheds light on what needs attention, what needs better control, and which applications need to be moved to reduce overall maintenance cost.
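A minimal sketch of this gas-guzzler check asks what share of the total maintenance cost the most expensive 20% of applications consume. The cost figures below are invented for illustration:

```python
def top_share(costs, fraction=0.2):
    """Share of total cost consumed by the top `fraction` of applications."""
    ranked = sorted(costs, reverse=True)
    k = max(1, round(len(ranked) * fraction))   # size of the top group
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical annual maintenance costs for ten applications.
costs = [120, 90, 15, 10, 8, 7, 6, 5, 5, 4]
print(f"Top 20% of apps consume {top_share(costs):.0%} of maintenance cost")
```

A result well above 20% confirms the skew and identifies the handful of applications worth differential treatment.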
4.4 Implement
Implementation is a crucial phase of the APM framework. A plan is developed and concurrence is obtained from stakeholders. It is recommended to have an implementation team and proper management oversight for the implementation. Some of the tasks are:
• Finalize recommendations
• Discuss with stakeholders
• Present to management and obtain their concurrence
• Implement
4.5 Monitor & Control
Portfolio analysis is ongoing work. Once the recommendations are implemented, the portfolio assets need periodic evaluation, and it is recommended to align this periodic evaluation with the CIO's KRAs for success. Selected tasks are:
• Define appropriate metrics
• Develop a scorecard
• Collect information, evaluate, and monitor
• Realign the overall APM process
5. CONCLUSION
Application Portfolio Management is an important concept for a CIO and is very helpful in integrating business goals with IT initiatives. Tools are available in this space; however, without waiting for tools, a beginning can be made by introducing the process and gaining a better understanding before investing in a tool.
The heuristic model discussed in this paper can be practiced in an organization, and a beginning can be made without any additional investment in APM tools.
There are always applications that consume a significant portion of support resources. One APM study revealed that the top 20 applications (out of 120) in an organization accounted for 88% of the hours spent on maintenance. A differential strategy was suggested for these applications, including better supervision, reassignment of key resources, and alignment of some applications with the long-term road map of the enterprise. This helped reduce overall maintenance time.
The analysis can also help classify applications into buckets such as Maintain, Continue, Improve, and Retire.
Globalization and the dilution of trade boundaries have forced industries to evaluate various options for cost-effective operations. Industries now face the challenge of channeling resources to meet organizational objectives in continuously changing environments. Information Technology plays a major role in such a dynamic business environment. However, with the advent of new technological tools and continuous advancement, the IT environment has become very complex.
Chief Information Officers (CIOs) regularly add new assets to their IT portfolio, making the portfolio more and more complex. In addition, they face the task of aligning business needs and IT assets. Cost consideration has becoming increasingly important. The challenge is to do more for less.
In such a dynamic situation, the management of the IT Portfolio has become an increasingly important and crucial task. An increasing number of CIOs are pursuing IT portfolio management as a tool to prioritize investment decisions, decide the location of various activities, evaluate the various assets with the value it delivers, and more.
Each asset in the portfolio is evaluated against parameters like cost of procurement, cost of managing, and indeed the cost of replacing the same. “Total cost of ownership” is seen in conjunction with the value delivered by the asset. Optimal management of assets leads to minimal TCO.
A study by Gartner states that, “Approximately 30 percent of the total cost of ownership during the life of an application is for its maintenance and management.” One can achieve a significant degree of cost reduction by maneuvering the costs associated with maintenance and management.
2. WHAT IS APPLICATION PORTFOLIO MANAGEMENT (APM)?
Portfolio management is the process of managing assets and investments in order to achieve desired organizational goals. The portfolio is a combination of assets that are expected to provide certain returns. It has risks associated with it. PM includes selecting a set of assets congruent with the set goal, managing the economic lifecycle of those assets, dynamically divesting and investing in different assets to optimize gains. In the IT context, the portfolio includes application software, hardware, infrastructure, resources, processes, and so on. While financial portfolio management has been in practice for many years, IT portfolio management is relatively new and gaining ground. An integrated framework that helps in investment, divestment, modifications, and movement of application assets has become a necessity to align business and IT goals of an enterprise.
Application Portfolio Management can be best described as:
• A "living program" that allows you to assess the applications in your portfolio, evaluate potential changes, and understand the risks and impact of these changes to the portfolio.
• A discipline and a set of tool that enables a CIO to respond to the pressures of managing an application portfolio.
• A framework helping in relating the total cost of ownership to revenue, identifying redundancies and gaps in current capabilities, pinpointing trouble spots, and highlighting opportunities to pursue sourcing alternatives.
3. WHY IS IT NEEDED?
An APM framework…
• Continuously monitors the environmental changes in business and keeps it optimal.
• Aligns business and IT objectives.
• Reduces portfolio complexity and creates a portfolio roadmap.
• Reduces the total cost of ownership.
4. AN APPROACH TO APM
The introduction of APM in an organization has to be phased balancing the costs, benefits, risks, and business objectives. The APM is a continuous process like a Financial Portfolio Management where the portfolio manager continuously watches the environmental changes and fine-tunes the portfolio for optimal gain.
The Application Portfolio Management necessarily requires a life cycle for effectiveness. The phases of the lifecycle are described hereunder:
4.1 Define Goals & Strategies
IT initiatives are meant to facilitate better business goal achievement. Business goals are defined for the overall organization. A top-down approach is recommended to create IT goals congruent with business goals. One of the options could be the use of Balance Score Card (BSC) developed by Kaplan and Norton. Once the IT goals are frozen, the same is broken into functional goals. This leads to a broad-level functional and technological implementation strategy.
Some key tasks are listed below:
• Identify relevant business Goals
• Identify IT goals and initiatives
• Map IT initiatives
• Cluster the applications on the basis of
o Functional
o Technology
• Evaluate alternatives strategy for clusters
• Finalize the strategy for each cluster
• Set goal for APM initiatives
4.2 Resource the initiative
Management sponsorship is a must to take the initiative forward. It is suggested that one must identify an owner for the initiative, get a budget approval, set improvement goals, and create a plan to complete the exercise. Some of the tasks during this phase are:
• Obtain management sponsorship
• Identify a lead to carry the initiative
• Get budget allocation
• Get Plan approval
4.3 Conduct an assessment
An analytical framework must be put in place to assess each application from the Maintain, Improve, and Retire perspective. The assessment is a combination of interviews with stakeholders and relevant data collection. The collected data is sanitized and analyzed using the framework. An index representing the ease of movement and value creation is established for each application. The set of applications are classified in different bands based on ease of movement, value creation, functional grouping, and technological grouping. A final sequence is arrived at by superimposing customer comfort level on the final analysis. The task sequence is listed hereunder:
• Portfolio analysis
• Model Building / Customization
• Analysis and computation
o Application Index
o Criticality, Volatility, Complexity indices
o Value Creation Index
• Evaluate applications and decide the strategy
4.3.1 Developing application indices
A detailed analysis in terms of criticality, complexity, value, cost, etc. is necessary to gain insight into the state of each application and decide its future. Since the analysis is aimed at examining the application portfolio and improving its cost-value performance, an index is established to throw light on ease of movement.
The application index is a measure of the moveability of an application: the higher the index, the more difficult the application is to move. Developing the index helps classify applications into different bands and provides a relative assessment of their moveability.
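As a rough sketch of how such an index might be computed (the factor names, 1-5 scales, and weights below are hypothetical illustrations, not values from this paper), a weighted sum of factor scores is one simple approach:

```python
# Illustrative sketch of an application (moveability) index.
# Factor names, scales, and weights are hypothetical assumptions:
# each factor is scored 1-5 and combined as a weighted sum.

WEIGHTS = {
    "criticality": 0.30,
    "complexity": 0.25,
    "volatility": 0.20,
    "documentation": 0.15,
    "cost": 0.10,
}  # hypothetical weights; must sum to 1.0

def application_index(scores):
    """Weighted sum of factor scores (each 1-5); higher = harder to move."""
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)

# A made-up application scored against the hypothetical factors
app = {"criticality": 4, "complexity": 5, "volatility": 2,
       "documentation": 3, "cost": 4}
print(round(application_index(app), 2))  # prints 3.7
```

In practice the factors and weights would come out of the Model Building phase described in section 4.3.3, agreed with stakeholders rather than fixed up front.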
4.3.2 Portfolio Analysis
Application portfolio analysis is the first step towards application outsourcing planning. It details:
• Application
• Technical Environment
• Functional Group
• Documentation Group
• Associated costs
The following table provides a high-level overview of the Portfolio Analysis phase:
Portfolio Analysis
Entry Criteria
1. Go ahead
Exit Criteria
1. Identification of application portfolio.
2. Determination of technical and functional grouping.
Methods and Tools
1. Data Collection—Forms/Questionnaire
2. Interviews
3. Data Summarization
Key Tasks
1. Establish a high-level plan.
2. Discuss the detailed approach.
3. Conduct interviews with application managers, key users, and IT directors.
4. Collect data from available records.
5. Identify functional group.
6. Identify technical grouping.
4.3.3 Model Building and customization
This phase helps in customizing the framework to suit specific requirements of the customer. The diagram depicts the base model, which is customized during this phase.
The following table provides a brief overview of Model Building and customization:
Model Building and customization
Entry Criteria
1. Attribute-wise application inventory.
2. Technological and functional grouping.
Exit Criteria
1. Definition of application attributes and their factors.
2. Model for application index.
Methods and Tools
1. Sum of position digit method.
Key Tasks
1. Identify significant factors affecting application outsourcing.
2. Define each factor as an index.
3. Assign weight to each application factor.
4.3.4 Analysis and computation
During this phase, significant factors are grouped together to develop the various application indices. The data is examined for range behavior, the indices are computed, and the indices are normalized to bring parity amongst them.
Analysis and computation
Entry Criteria
1. Application indices definition
Exit Criteria
1. Application index
2. Functional group
3. Technological group
Methods and Tools
1. Statistical analysis (distribution, range analysis)
2. Delphi techniques
Key Tasks
1. Establish the indices.
2. Normalize the application indices.
3. Compute composite application index.
4. Map Technological and functional grouping to each application.
5. Compute Documentation index.
Factors contributing to each index are identified along with their value ranges. Data cleansing and analysis are performed to arrive at normalized indices. After the various indices are developed, the Delphi method is used to arrive at the Application Index and the Value Creation Index.
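The normalization step above can be sketched as follows. This assumes a simple min-max normalization (the paper does not prescribe a specific method), and the raw scores are made-up examples:

```python
# Min-max normalization to bring raw indices onto a common 0-1 scale,
# so that indices measured on different ranges can be compared.
# The normalization method and the sample data are assumptions.

def normalize(values):
    """Rescale a list of raw scores to [0, 1] via min-max normalization."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

raw_complexity = [12, 45, 30, 8, 45]  # hypothetical raw complexity scores
print([round(x, 2) for x in normalize(raw_complexity)])
# prints [0.11, 1.0, 0.59, 0.0, 1.0]
```

Once each index sits on the same scale, a composite application index can be formed as a weighted combination, with the weights settled through the Delphi rounds mentioned above.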
4.3.5 Studying Indices
The analysis of the indices gives real power to the framework. It helps to view the application portfolio from different angles and to arrive at conclusions. The following sections provide an overview of the different perspectives:
(i) Cost Vs Criticality
The criticality index for each application is a measure of the application's criticality from the business standpoint. Plotting cost against criticality in a 2x2 matrix yields four quadrants, depicting four possible situations.
Quadrant I: Low on criticality and low on cost: These applications require further analysis before any recommendations are made.
Quadrant II: High on criticality and low on cost: These applications can be evaluated for retention.
Quadrant III: Low on criticality and high on cost: These applications are potential candidates for replacement.
Quadrant IV: High on criticality and high on cost: These applications should be examined for cost-performance improvement.
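The quadrant assignment above is mechanical once cut-off points are chosen. A minimal sketch, assuming normalized 0-1 scores and a simple threshold split (both are assumptions; in practice the cut-offs might be medians of the portfolio):

```python
# Sketch of the Cost vs. Criticality quadrant classification.
# Scores are assumed normalized to 0-1; the 0.5 cut-offs used in the
# example call are illustrative, not prescribed by the framework.

def quadrant(cost, criticality, cost_cut, crit_cut):
    """Map one application's (cost, criticality) scores to a quadrant."""
    if criticality < crit_cut and cost < cost_cut:
        return "I: analyze further"
    if criticality >= crit_cut and cost < cost_cut:
        return "II: evaluate for retention"
    if criticality < crit_cut and cost >= cost_cut:
        return "III: candidate for replacement"
    return "IV: improve cost performance"

print(quadrant(cost=0.8, criticality=0.2, cost_cut=0.5, crit_cut=0.5))
# prints III: candidate for replacement
```

The same function shape works for the Volatility vs. Criticality view discussed next, with volatility substituted for cost and the quadrant labels adjusted.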
(ii) Volatility Vs Criticality
Critical applications are the lifeline of an organization. The role of the CIO is to provide stable applications to the organization. The Volatility vs. Criticality matrix helps identify the applications or application groups whose stability needs improvement. This study leads to further analysis in terms of design improvement, platform rationalization, re-engineering, and so on.
Quadrant I: Low on criticality and low on volatility: These applications require more analysis for action.
Quadrant II: High on criticality and low on volatility: If there are no other compelling factors, these applications can be continued on an as-is basis. For example, if some applications from this group have the potential for savings by replacing or off-shoring them, they can be moved, or else they can be continued in their present state.
Quadrant III: Low on criticality and high on volatility: We do not have any application falling in this quadrant. Normally, these applications can be evaluated for retiring.
Quadrant IV: High on criticality and high on volatility: These applications need to be improved. The improvement should aim to reduce volatility. This could be possible by replacing or fixing the cause of volatility.
(iii) Analyzing Functional Group Complexity
The various functional group systems must be aligned with business objectives. The Complexity Index of each application is rolled up into a Group Complexity Index, and Pareto analysis is then used to identify the most complex functional groups. This analysis provides insight for group-wise BPR and re-engineering to reduce system complexity.
(iv) Gas Guzzler
Organizations have limited dollars to support their application portfolios. Interestingly, the applications in a portfolio do not consume equal amounts of dollars; they tend to follow the 20-80 rule. This analysis sheds light on what needs attention, what needs better control, and which applications need to be moved to reduce overall maintenance cost.
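A small sketch of this "gas guzzler" check: rank applications by support cost and walk down the list until the chosen share of total spend is covered. The application names and dollar figures are invented for illustration:

```python
# Pareto-style "gas guzzler" check: find the few applications that
# together consume most of the support budget. Names and costs are
# hypothetical annual support dollars, not data from the paper.

def gas_guzzlers(costs, share=0.8):
    """Return the app names that together account for `share` of total cost."""
    total = sum(costs.values())
    running, guzzlers = 0.0, []
    for app, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        if running >= share * total:
            break
        guzzlers.append(app)
        running += cost
    return guzzlers

costs = {"billing": 500, "crm": 300, "hr": 80, "intranet": 60, "legacy_rpt": 60}
print(gas_guzzlers(costs))  # prints ['billing', 'crm']
```

Here two of five applications account for 80% of spend, which is the shape of distribution the 20-80 rule predicts; those two would be the first candidates for tighter control or relocation.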
4.4 Implement
Implementation is a crucial phase of the APM framework. A plan is developed and concurrence is obtained from the stakeholders. It is recommended to have an implementation team and proper management oversight for the implementation. Some of the tasks are:
• Finalize recommendations
• Discuss with stakeholders
• Present to management and obtain their concurrence
• Implement
4.5 Monitor & Control
Portfolio analysis is ongoing work. Once the recommendations are implemented, the portfolio assets need periodic evaluation. For success, it is recommended to align this periodic portfolio evaluation with the CIO's KRAs. Some select tasks are:
• Define appropriate metrics
• Develop a scorecard
• Collect information, evaluate, and monitor
• Realign the overall APM process
5. CONCLUSION
Application Portfolio Management is an important concept for a CIO. It is very helpful in integrating business goals with IT initiatives. Tools are available in this space; however, without waiting for a tool, a beginning can be made by introducing the process and building a better understanding before investing in one.
The heuristic model discussed in this paper can be put into practice in an organization, and a beginning can be made without any additional investment in APM tools.
There are always applications that consume a significant portion of support resources. An APM study revealed that the top 20 applications out of 120 in an organization accounted for 88% of the hours spent on maintenance. A differential strategy was suggested for these applications, which included better supervision, reassignment of key resources, and aligning some of the applications with the long-term road map of the enterprise. This helped in reducing the overall maintenance time.
The analysis can also help in identifying applications in different buckets such as Maintain, Continue, Improve, and Retire.