Assessing the status of a project

Following is my list of hints you can use to ensure that your projects, or the projects you are assigned to lead, are well managed and run:

  • Work plans – does the team operate under work plans that are based on a proven methodology and estimated with a proven estimator? First, is there a detailed work plan showing deliverables, tasks, estimates, assumptions, schedules, and resources you can review? If there is not, this needs to be addressed as a top priority. Projects that try to operate without a detailed work plan run large risks of not meeting client expectations and, ultimately, of not delivering on time, on budget, and on scope. Also, as part of the detailed work plans, are the key milestones and dependencies identified, documented, and understood by the team?
  • Time reporting – does the team do weekly time tracking against the aforementioned detailed work plans? Once a detailed work plan is in place, the next step is to actively track work efforts against it. Periodic time tracking against planned tasks and deliverables is a must to ensure the plan is being followed. If time spent is tracked correctly and the plans are good, then we will have metrics available such as:
    • the Cost Performance Indicator (CPI), which reports the value of planned days earned against days burned (spent),
    • the Schedule Performance Indicator (SPI), which reports the value of scheduled days earned against calendar days burned (spent).

We will see in another article that if either the CPI or the SPI is significantly under or over a value of 1.0, then you should be looking into the work plans, the estimates, the skills on the team, scope management, and/or the addition of unplanned tasks to the project.
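
As a minimal sketch of how these two indicators can be computed and screened, assuming day-based earned-value figures (the function names and the 10% tolerance are illustrative assumptions, not standard thresholds):

```python
# A minimal sketch: screening day-based earned-value figures.
# Function names and the 10% tolerance are illustrative assumptions.

def cpi(earned_days: float, burned_days: float) -> float:
    """Cost Performance Indicator: planned days earned per day burned."""
    return earned_days / burned_days

def spi(earned_days: float, calendar_days: float) -> float:
    """Schedule Performance Indicator: scheduled days earned per calendar day."""
    return earned_days / calendar_days

def screen(indicator: float, tolerance: float = 0.10) -> str:
    """Flag indicators significantly under or over 1.0."""
    if abs(indicator - 1.0) > tolerance:
        return "investigate: plans, estimates, skills, scope, unplanned tasks"
    return "on track"

# Example: 40 planned days of value earned after 50 days of effort,
# 45 calendar days into the schedule.
print(screen(cpi(40, 50)))  # CPI = 0.80 -> investigate
print(screen(spi(40, 45)))  # SPI = 0.89 -> investigate
```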

Processes in Software Engineering

To reach high quality levels when developing and maintaining software, it is necessary to develop, maintain, and improve processes as well. Better work processes result in better work products, where "better work products" means enhanced features, improved quality, less rework, and easier modifications.

Here are the basic principles to follow:

  • Work processes must be designed with the same care used to design work products; work processes must be designed to satisfy process requirements and process constraints, fit the needs of individual projects, and make the work processes efficient and effective.
  • Work processes for each project should be derived from a process framework. A process framework is a generic process model that can be tailored to meet the needs of a variety of situations. The tailoring of a framework involves adding, deleting, and modifying elements to adapt the framework to the needs of particular projects.

Process design and process improvement result in shorter schedules, higher quality, and lower costs.

We should, however, notice that process improvement seldom happens spontaneously: a positive ROI (return on investment) requires an ongoing investment of time, effort, and resources.

It is, however, important to stress the second of the points above: processes must be designed and tailored to specific needs, not taken and adopted from the best practices of other organizations without adequate analysis.

Process design is best accomplished by tailoring and adapting well-known development process models and process frameworks, just as software design is best accomplished by tailoring and adapting well-known architectural styles and architectural frameworks.

The following general principles apply:

  • There are several well known and widely used software development process models, including Waterfall, Incremental-build, Evolutionary, Agile, and Spiral models.
  • There are various ways to obtain the needed software components; however, different ways of obtaining software components require different mechanisms of planning, measurement, and control.
  • The development phases of a software project can be interleaved and iterated in various ways.
  • Iterative development processes provide the advantages of:
    • continuous integration,
    • iterative verification and validation of the evolving product,
    • frequent demonstrations of progress,
    • early detection of defects,
    • early warning of process problems,
    • systematic incorporation of the inevitable rework that occurs in software development,
    • early delivery of subset capabilities (if desired).
  • Depending on the iterative development process used, the duration of iterations ranges from one day to one month.
  • Prototyping is a technique for gaining knowledge; it is not a development process.
  • Always remember that the mechanisms of planning, measurement, and control used in a software project are strongly influenced by the development process used.
  • SEI, ISO, IEEE, and PMI provide frameworks, standards, and guidelines relevant to software development process models.

Exit Criteria

Every project phase has a beginning and an end.

This means that some criteria will need to be satisfied in order to start a phase, and some others will need to be satisfied in order to mark the phase as completed.

Very often we care about the requirements to start a phase (entry criteria), but we do not give enough attention to the requirements that mark its end (exit criteria), unless they were specified within the contract.

Failing to meet exit criteria, or not fixing correct ones, often causes problems in the later phases or, even worse, when many teams are involved: for example, when the development team hands control to the maintenance team, it is really important that the documentation meets strong quality requirements; if it does not, even the easiest change will imply analysis of the source code, which can be a really time-consuming practice.

Exit criteria do not need to be complex; nevertheless, they should include at least the following:

  • Deliverables list
  • Required quality for the deliverables
  • List of deliverables that are going to be produced but not completed
  • List of what is not going to be done and/or completed

There are different ways to define, communicate, and monitor exit criteria. What is important is that everything is prepared at the beginning (ideally during the planning phase), kept as simple as possible, and used in project control.
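
As a minimal sketch, the four items above can be captured in a simple structure and checked during project control; the field names here are my own illustration, not from any standard:

```python
# A minimal sketch of an exit-criteria record, modeled on the four items
# listed above. Field names are illustrative, not from any standard.
from dataclasses import dataclass, field

@dataclass
class ExitCriteria:
    deliverables: list[str]                # deliverables list
    quality_gates: dict[str, str]          # required quality per deliverable
    partial_deliverables: list[str] = field(default_factory=list)  # produced but not completed
    out_of_scope: list[str] = field(default_factory=list)          # not going to be done

    def phase_complete(self, accepted: set[str]) -> bool:
        """A phase is complete when every full deliverable has been accepted."""
        return all(d in accepted for d in self.deliverables)

criteria = ExitCriteria(
    deliverables=["design document", "test plan"],
    quality_gates={"design document": "reviewed and signed off"},
    partial_deliverables=["operations manual (draft)"],
    out_of_scope=["performance tuning guide"],
)
print(criteria.phase_complete({"design document"}))  # False: test plan missing
```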

Communication Management Plan

When managing a project, the way formal communications are handled is quite important.

In this article I’m going to present the characteristics that I believe should be specified when preparing a communication plan, for every document or any other communication medium.

These characteristics will then be specified in the “communication matrix” which is usually attached to the communication plan.

  • Scope/Object of communication: In a project there is the need for periodical status reports. Maybe updates on project policies or change management processes are needed. Every formal communication that happens within a project should be regulated and listed in the communication plan.
  • Purpose: For each communication channel (i.e. document or meeting) you list in your plan, you need a brief explanation of the document’s or meeting’s purpose. You want to answer why the communication is needed and under what conditions.
  • Frequency: By writing down the expectations, you ensure that all stakeholders understand how often communication is needed. Defining when milestones are due is essential to the process because you can measure the accuracy of the cost and time baselines to date, and the overall project status. You may also want to set up conditional reporting to establish that when specified conditions are met, individuals should report accordingly.
  • Medium/Channel: It is possible to communicate via report, via e-mail, via workflow tools, or via meetings. What is important is that we specify the format for the communication. Sometimes a written document is better, sometimes a simple e-mail. There’s no right or wrong way to present information, but the preferences and reasons for the modality have to be documented in your plan. For example, you may request that your project team members complete a weekly status report of their assignments in a Microsoft Word form and e-mail it to you. But (and here’s the rub), at each project status meeting the team members should bring the Word document in hard copy so they can use it to verbally review their progress. To save your sanity, you have each team member submit status reports prior to the meeting so you have all the reports at the status meeting. And it’s all documented in your plan.
  • Audience: Every report or meeting will have its own audience. There is no sense in inviting to a meeting people who are not interested in it, just as there is no sense in sending a document to someone who is not interested in receiving it and improving it.
  • Duration: Sometimes the communication will happen periodically along the whole project, while in other cases it could be limited to a period of time or even to a single event. We should specify this in the communication plan.
  • Responsibility: One common misunderstanding is that the project manager is responsible for every piece of communication. That’s just not true. The project manager is responsible for ensuring that communication takes place, but he can’t be responsible for the actual communicating. For example, if an expert is hired to take care of some aspects of the project, it is important that he communicates with the team leaders (and maybe not with everyone). It will be the project manager’s responsibility to facilitate the communication and to verify that it happens profitably, but the communication itself will be the responsibility of the team leaders. Sometimes it will be important to distinguish between the party responsible for producing the communication (e.g. team leaders) and the party responsible for scheduling it (e.g. the PMO).
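
To make the communication matrix concrete, here is a minimal sketch of one of its rows, built from the characteristics above; the field names and the sample row (based on the weekly status report example) are illustrative:

```python
# A minimal sketch of one row of a "communication matrix", using the
# characteristics listed above. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class CommunicationItem:
    scope: str           # what is being communicated
    purpose: str         # why, and under what conditions
    frequency: str       # how often (or a triggering condition)
    medium: str          # report, e-mail, workflow tool, meeting...
    audience: list[str]  # only those interested in receiving or improving it
    duration: str        # whole project, a period, or a single event
    responsible: str     # who produces it (not necessarily the PM)

weekly_status = CommunicationItem(
    scope="team status report",
    purpose="review progress of assignments before the status meeting",
    frequency="weekly, submitted before the status meeting",
    medium="Word form sent by e-mail, hard copy at the meeting",
    audience=["project manager", "team members"],
    duration="whole project",
    responsible="each team member",
)
```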


Critical Chain Method

Critical Chain Method (CCM) concepts were first introduced in 1984 by Goldratt’s book “The Goal” (ed. Gower). The main factor for the success of these ideas is that they find solutions to the classical problems we encounter once we plan following the Critical Path Method (CPM) approach.

CCPM (Critical Chain Project Management) is basically a mix of the most recent best practices:

  • PMBOK: Plan & Control.
  • TOC (Theory of Constraints): Removal of bottleneck to solve system constraints.
  • Lean: Remove waste.
  • Six Sigma: Reduce any deviation from the optimum solution.

The problems

Going back to CCM, the main problems it aims to solve are:

  • Over-estimating

This is a problem that usually comes up when defining plans. In a few words, since:

  • estimates are always cut;
  • details, at the beginning of a project, are not always clear;

tasks that have the biggest uncertainty are systematically over-estimated. We create contingencies to protect us from the fact that things we don’t know will go wrong. This process is then amplified because estimates are presented to many stakeholders, at many levels, and often each level puts its own contingency on top, so that in the end the project duration (and cost) is usually over-estimated.
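
A minimal sketch of this amplification effect (the 20% contingency per level is an illustrative assumption):

```python
# A minimal sketch of the contingency-stacking effect described above:
# each stakeholder level adds its own safety margin on top of the
# previous one, so the final estimate grows multiplicatively.
# The percentages are illustrative.

def stacked_estimate(base_days: float, contingencies: list[float]) -> float:
    """Apply each level's contingency on top of the previous estimate."""
    estimate = base_days
    for c in contingencies:
        estimate *= (1.0 + c)
    return estimate

# A 10-day task, with developer, team leader, and PM each adding 20%:
print(stacked_estimate(10, [0.20, 0.20, 0.20]))  # ~17.3 days, not 10
```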

  • Student Syndrome.

This happens in the case of long projects or loose schedules: usually people will start working on a task not when planned, but when the delivery time is near. The reasons are linked both to the other issues we are presenting here (multitasking, Parkinson’s Law) and to simple psychology: people will not start working until they feel the pressure.

  • Parkinson’s Law.

This is a notorious empirical law in project management: work expands so as to fill the time available for its completion.

In other words, if you assign 15 days to accomplish a task that can be done in 10, it will hardly ever happen that the work is completed in 10 days; magically, it will be completed in 15. Evidence of this law is well known to PMs.

  • Bad multitasking

When multitasking is not correctly managed, the result is wasted time.

This is another well-known issue in real projects: when you assign more than one task to a resource, most of the time you will get inefficiencies, because jumping from one task to the other, instead of completing one and then the other, only makes the most critical task late.
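
A minimal sketch of the effect (the durations and the day-by-day round-robin are illustrative): interleaving three equal tasks delays every completion, while working them sequentially finishes the first two much earlier and the last no later:

```python
# A minimal sketch comparing finish times for three 10-day tasks done
# sequentially versus interleaved round-robin (one day per task in turn).
# Durations are illustrative.

def sequential_finish_times(durations):
    """Each task is completed before the next one starts."""
    finishes, elapsed = [], 0
    for d in durations:
        elapsed += d
        finishes.append(elapsed)
    return finishes

def round_robin_finish_times(durations):
    """Work one day on each unfinished task in turn until all are done."""
    remaining = list(durations)
    finishes = [0] * len(durations)
    day = 0
    while any(remaining):
        for i, left in enumerate(remaining):
            if left > 0:
                day += 1
                remaining[i] -= 1
                if remaining[i] == 0:
                    finishes[i] = day
    return finishes

print(sequential_finish_times([10, 10, 10]))   # [10, 20, 30]
print(round_robin_finish_times([10, 10, 10]))  # [28, 29, 30]: nothing early
```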

  • Handling of early finish

In my opinion, this problem even has a theoretical root in CPM theory. In a few words, even if we were able to close a task early, we would not be able to get the most out of it.

Indeed, if the task is not on the critical path, closing it early does not guarantee a direct advantage (you may free some resources, but without planning you would not be able to use them). If, on the other hand, the task is on the critical path, an early close may not be so relevant (in the sense that other tasks could become critical) and, taking the concept to the limit, we might need to replan the project or part of it, perhaps finding another critical path…

In any case, CPM does not offer a natural way to handle early closes of project tasks, since it does not plan for them.


Cost in a Data Center

After a period of observation, in this small article I’m listing most of the costs that I found I had to manage in coping with a data center.
  • Server costs (A): With this and all other hardware components, you’re specifically interested in the total annual cost of ownership, which normally consists of the cost of hardware support plus some amortization cost for the purchase of the hardware.
  • Storage costs (B): In situations where a storage area network (SAN) or network attached store (NAS) is used for an application, a proportional cost over the whole SAN or NAS needs to be determined, including management and support cost for the hardware.
  • Network costs (C): This needs to be carefully considered because the fact that an application moves into the cloud does not necessarily mean that all the network traffic it generates disappears. For example, data may need to be pulled from the application’s database to be added to a data warehouse. Alternatively, when Web applications are moved into the cloud, corporate Internet bandwidth requirements may be reduced. Clearly, the ability to access external applications requires substantial bandwidth.
  • Backup and archive costs (D): The actual savings on backup costs depends on what the backup strategy will be when the application moves into the cloud. The same is true of archiving. Will all backup be done in the cloud? Will your organization still be required to back up a percentage of critical data?
  • Disaster recovery costs (E): In theory, the cloud service will have its own disaster recovery capabilities, so there may be a consequential savings on disaster recovery. However, you need to clearly understand what your cloud provider’s disaster recovery capability is. Not all cloud providers have the same definition of disaster recovery. IT management must determine the level of support the cloud provider will offer.
  • Data center infrastructure costs (F): A whole series of costs including electricity, floor space, cooling, building maintenance, and so on can’t easily be attributed to individual applications, but can usually be assigned on the basis of the floor space that the hardware running the application occupies. For that reason, try to calculate a floor space factor for every application (see the sketch after this list). For example, if your data center is only 40 percent full, the economics of putting lots of additional capacity into the cloud is not financially viable. However, if your data center is 90 percent full and has been expanding at 10 percent a year, you’ll run out of data center space next year. At that point, you may have to build a data center that could cost as much as $5 million. The cloud will be a much more economical choice.
  • Platform costs (G): Some applications only run in specific operating environments — Windows, Linux, HP-UX, IBM z/OS, and so on. The annual maintenance costs for the application operating environment need to be known and calculated as part of the overall costs.
  • Software maintenance costs (package software) (H): Normally this cost element is simple because it comes down to the software’s annual maintenance cost. However, it may be complicated if the software license is tied to processor pricing. The situation could be further complicated if the specific software license is part of a bundled deal.
  • Software maintenance costs (in-house software) (I): Such costs exist for all in-house software, but may not be broken out at an application level. For example, database licenses used across many different applications may be calculated at a corporate level. It may be necessary to allocate these database costs at a per-application level. There may also be these kinds of costs for packaged software if in-house components have been added or if integration components have been built to connect this application to other applications.
  • Help desk support costs (J): It’s necessary to analyze all help desk calls at an application level to determine the contribution of an application (if any) to help desk activity. The support costs for some applications may be anomalous and may disappear with the movement into the cloud. Some applications require more support than others. Understanding the different support requirements is key to making the right decision on the cloud.
  • Operational support personnel costs (K): There is a whole set of day-to-day operational costs associated with running any application. Some are general costs that apply to every application, including staff support for everything from storage and archiving to patch management, networks, and security. Some support tasks, however, may be particular to a given application, such as database tuning and performance management.
  • Infrastructure software costs (L): A whole set of infrastructure management software is in use in any installation, and it has an associated cost. For example, management software is typically used for many different applications and can’t easily be divided across specific applications.
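
As mentioned under item (F), here is a minimal sketch of a floor-space factor: shared data-center costs allocated by the share of floor space an application’s hardware occupies. The cost and floor-space figures are illustrative assumptions:

```python
# A minimal sketch of the floor-space factor from item (F): shared
# infrastructure costs (electricity, cooling, building...) allocated by
# the floor-space share of the application's hardware.

def floor_space_cost(total_infra_cost: float,
                     app_floor_space_m2: float,
                     total_floor_space_m2: float) -> float:
    """Allocate shared data-center costs by floor-space share."""
    factor = app_floor_space_m2 / total_floor_space_m2
    return total_infra_cost * factor

# $500,000/year of shared costs; the application's racks occupy 12 m^2
# out of 600 m^2 of used floor space:
print(floor_space_cost(500_000, 12, 600))  # $10,000/year
```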

Performance approach on z/OS application

In this article I will briefly present the method I follow when analyzing a z/OS application for optimization. I present this method here because I think it could be useful even on systems which are not mainframes. For length reasons I cannot go deeply into details (however, on the site you can find a document on coding standards that can be used to obtain good performance).

Going back to the subject, the activity goes through some phases.

Set-up: here we need to understand the expectations and, based on these, to plan and prepare the work. Usually the main activities are:

  • Define scope of work
  • Define client expectation
  • Review existing tools and data
  • Define strategy of work
  • Define a plan to share with the client
  • Set-Up the work team

Measure: here we prepare all the tools to measure the characteristics of the system; if possible, we gather and organize already existing measures:

  • Define tools and sources for getting data
  • Gather data – this phase is often really complex on the practical side because, even if sources and tools are well defined, there is often a lack of global vision and data are spread across Excel sheets, local databases, central databases… The biggest part of the work is being able to extract the most interesting data and to present them in an easy and readable way.

We may have to dig into:

  • System data. For example, the CPU usage during the day. This category of data depends on the scope of work;
  • Data on on-line usage (CICS, IMS). Usually, at first, I ask for data on the CPU and response time of each transaction (see the sketch after this list). Then, once the problem is found, it is possible to go into more detail with specific data;
  • Data on batch (night processes). Here the job sequence is quite important, together with each job’s CPU consumption, execution time, and I/O. It is also important to obtain the scheduling plan (i.e. the list of real dependencies) in a readable form.
  • Data on the DB. For a first screening I usually need: the physical design, the PLAN_TABLE, the lock logs, and the master log (for DB2 systems);
  • Functional data. In my case, when I work for banks, I usually ask for the number of daily transactions, the number of physical branches, the number of clients, of current accounts, of loans, and of daily payments, as well as the critical days and possibly the business growth plans.
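
As a minimal sketch of the first screening on transaction data mentioned above, once the raw records have been extracted from spreadsheets or databases, transactions can be ranked by total CPU and average response time to pick candidates for deeper analysis. The record layout and the sample figures are illustrative:

```python
# A minimal sketch of the first screening step: rank transactions by
# total CPU and average response time. The sample records are
# illustrative stand-ins for CICS/IMS monitor extracts.
from collections import defaultdict

# (transaction id, CPU seconds, response time in seconds)
records = [
    ("TRN1", 0.012, 0.30), ("TRN1", 0.011, 0.28),
    ("TRN2", 0.050, 1.10), ("TRN3", 0.002, 0.05),
]

cpu_total = defaultdict(float)
resp = defaultdict(list)
for txn, cpu, rt in records:
    cpu_total[txn] += cpu
    resp[txn].append(rt)

for txn in sorted(cpu_total, key=cpu_total.get, reverse=True):
    avg_rt = sum(resp[txn]) / len(resp[txn])
    print(f"{txn}: total CPU {cpu_total[txn]:.3f}s, avg response {avg_rt:.2f}s")
```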


Size matters

In my previous job I got confirmation that there is a relevant variable in the definition and implementation of processes within an organization.

This variable is the size of the organization and the context in which the process is introduced.

In a few words, I got relevant proof that a process never fits every size; moreover, it works only in relation with other processes and needs to be tailored to the actual organization.

In my past assignment I had an awful experience trying to apply to our 300-server estate the same processes as a much bigger 3000-server estate.

This may seem obvious, but when it comes to reality, especially in decentralized organizations, it is hardly ever applied. So whenever you are going to implement a process, remember: size matters.

Priorities in Infrastructure management

I recently took over the infrastructure team and, I have to admit, the complexity was much higher than expected. In any case, I worked out some priorities, which I think may apply in any situation. Here’s the list:

  • Asset management: You need to know where you sit. Assets are the basic building blocks of any IT infrastructure. Correct asset management is needed in order to monitor, track, and plan for the infrastructure. Once the list is clear, and it is clear what software runs on each platform and how various groups use it, then service monitoring and a proactive approach towards the business become easy. Needless to say, asset management is vital for your finance department, as usually the largest part of the budget comes from this item.
  • Service monitoring: Once you know your assets, you need to monitor what’s happening at each client, as well as work on the tasks required to maintain the right level of service. A service manager is usually needed to perform this task, as it requires a fully dedicated manager with both technical and soft skills, in order to handle critical situations with the customers.
  • Change management: Activities in this process area involve managing and implementing all changes in applications and hardware. This area is the core of the daily work. A strong process needs to be in place in order to handle every aspect linked to a change in production (documentation, service management, asset management…).
  • Security and compliance: In this area there are two families of activities: the first is securing the whole IT estate against external threats (which means setting up serious patch management and configuration management, as well as working in strict contact with the Information Security department); the second is setting up a strong authentication process in order to manage how users access IT facilities.
  • Governance: Governance is the glue that holds all the pieces explained above together. Governance is related mainly to budget management, as well as to following a correct strategy. Compliance with industry and government regulations (like Sarbanes-Oxley, the Health Insurance Portability and Accountability Act, and the Payment Card Industry Security Standards) is also a relevant process within governance.

IT Pillars for next years banking strategy

In IT, it is always difficult to develop a long-lasting perspective. Cloud computing and the sudden explosion of mobile technologies are an example: only 5 years ago neither was even on the agenda of the big players, while SOA seemed to be the solution to any problem.

Given this, in order to develop a good strategy, there are, in my opinion, a few pillars, independent of any direction IT could take, that need to be considered:

  • It will be vital to increase customer centricity by leveraging higher involvement and a closer relationship. I don’t think every bank should behave like a social network, but right now the company-customer relationship in many cases still looks like the old-fashioned one, while clients are much more mature than the current offer. Within this framework, the usage of social-network techniques could only bring more added value.
  • There is a need for consolidation, for setting up realistic strategies, for following step-by-step approaches, and for reaching cost effectiveness. IT went through continuous changes in the past decade: the offer and the list of bank assets grew without a real return on all the investments. In the current crisis period, it is time to look at what banks have inside, to understand how to make the best out of it, and to match it with the setting-up of a clear vision.
  • Banks have to find the right balance between innovation and risk. Needless to say, sudden and fast changes don’t go together with risk reduction. Nowadays this aspect is still handled without the right balance, so we can find the whole spectrum: from low-risk approaches (which at the same time reduce business and increase costs) to aggressive approaches that generate too many risks even in the short term.
  • Internal employees are an asset and, moreover, a plus. They must be enabled to work smarter, and IT must provide for it.

These pillars are unrelated to any specific IT implementation or technology (and can possibly apply to any type of company): they are the background of any trend or direction. Without them, starting any new implementation would mean only following the fashion of the moment.