Software Project Management
Ebook, 340 pages, 3 hours


About this ebook

Software Project Management: Measures for Improving Performance focuses on more than the mechanics of project execution. By showing the reader how to identify and solve real world problems that put schedule, cost, and quality at risk, this guide gets to the heart of improving project control and performance.
• Identify measurement needs and goals
• Determine what measures to use to maximize the value of data
• Interpret data and report the results
• Diagnose quality and productivity issues
• Use metrics data to solve real problems
This is a must-read for project managers and engineering managers working in organizations where deadlines are tight, the workload is daunting, and daily crises are the rule rather than the exception. The text covers everything from simple run rate data through progressively more advanced measures, as well as:
• Examples that show you how to combine measures to solve complex problems
• Exercises that guide you through best practices for metric program development and implementation
From beginning to end, Software Project Management: Measures for Improving Performance guides you to improved project performance — long before you turn the last page!
Language: English
Release date: Mar 1, 2006
ISBN: 9781523096305
Author

Robert Bruce Kelsey PhD

Robert Bruce Kelsey, Ph.D., is well recognized for his expertise in software engineering and project management. He has authored two dozen papers on software metrics, process improvement, and quality assurance. He is on the editorial or review boards of several industry journals and professional organizations, and as a member of the IEEE Standards Association he contributes to the IEEE learning technology and software standards. Also an experienced course developer and instructor with broad interests, Dr. Kelsey has taught in corporate, university, and community college settings on topics ranging from software quality assurance, to astronomy, to logic.

    Book preview

    Software Project Management - Robert Bruce Kelsey PhD

    CHAPTER 1

    Measures, Goals, and Strategies

    Some of us climb mountains because they’re there. Some of us read books to know we are not alone. Some of us measure software just so we and our teams can survive the project, and some of us measure software to ascertain whether the development processes are in control.

    MEASURING PERFORMANCE WITHIN A PROJECT

    There’s a big difference between measuring the performance of a software development project and measuring performance within a software development project. Software project performance is typically measured in high-maturity organizations. In these organizations, the business processes are documented and audited for efficiency. The development organization has corporate support and funding for formal software quality assurance and process improvement. In high-maturity organizations, the software project as a whole is measured as if it were a complete business process in itself: it’s effective (delivers on time), it’s efficient (meets budget), and it’s profitable (results in margin or profit).

    In such organizations, metrics programs are a tool for measuring project capability and compliance. The paradigms for such software metrics programs are the well-established CMMI® level 4 organizations with fully functional organizational process performance and quantitative project management in place. With their substantial historical data and standardized and audited processes, these organizations can make statistically valid inferences from their measures about the project as a whole, identifying deviations and diagnosing process failures.
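
    To make the arithmetic behind such inferences concrete, here is a minimal sketch (an illustration of basic control limits, not an excerpt from the text; the measure, the figures, and the three-sigma threshold are assumptions made for the example): historical project data defines the limits, and a current value falling outside them flags a deviation worth diagnosing.

        import statistics

        def control_limits(history, sigmas=3):
            """Mean +/- k-sigma limits from historical project data.

            Illustrative only; a real quantitative-management group would rely on
            audited process data and more rigorous SPC techniques.
            """
            mean = statistics.mean(history)
            stdev = statistics.stdev(history)
            return mean - sigmas * stdev, mean + sigmas * stdev

        # Hypothetical defect densities (defects per KLOC) from past projects
        history = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3, 4.2]
        low, high = control_limits(history)
        current = 5.6
        if not (low <= current <= high):
            print(f"Deviation: {current} is outside [{low:.2f}, {high:.2f}]")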

    Much has been written about these formal software metrics programs. Any organization that wants to start an organization-wide, executive-sponsored software metrics program can follow the IEEE standards or the CMMI® model. When it comes to implementation details, dozens of books explain how to integrate measurement across the entire product and project lifecycle or how to use the data to improve your organization. When you need more extensive advice, read the classic texts by Grady, Fenton, Myers, Kan, and others (see Further Reading). They’ll tell you what worked for Hewlett-Packard, the National Aeronautics and Space Administration, and a host of other companies with ample revenue streams and a senior management staff committed to product quality.

    The problem is that many software practitioners don’t work for mature, or even maturing, software companies. Some work in situations where the software development organization is one of those challenged cost centers—costs too much and needs attention—but none of the executives knows what to do about it. There’s no enlightened senior management to endorse software process improvement initiatives, nor any highly respected and well compensated consultants to guide the program when senior management interest falters because the results are too slow in coming. There’s no Software Quality Assurance or Software Engineering Process Group to bear the brunt of the work of process improvement.

    Some software practitioners work in dot-coms or MIS/IT shops, suffocating under the pressure to deliver on impossible schedules without requirements, designs, or even adequate resources. There’s simply no room for documented processes when there are four developers in a one-person cubicle. There’s no time for reviews or audits when just getting the code done will take 12 hours a day, every day, for the next 18 weeks.

    You can’t worry about maturity levels when you’re always living on the edge. Project managers, development and test managers, and team leads in situations like these have to find a way around the crisis of the hour. They need software measurement data that they can use in day-to-day decisions, not in the next quarterly business review. They need to measure their performance against the project schedule, not the project’s compliance with historical norms for Estimate at Completion.
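
    For readers who want the terms made concrete, here is a minimal sketch of the standard earned value arithmetic behind Estimate at Completion (generic formulas with made-up figures, not anything specific to this book): the performance indices describe where the project stands today, while the completion estimate is the program-level projection.

        def evm_summary(bac, pv, ev, ac):
            """Standard earned value formulas.

            bac: Budget at Completion, pv: Planned Value,
            ev: Earned Value, ac: Actual Cost.
            """
            spi = ev / pv    # Schedule Performance Index; < 1 means behind schedule
            cpi = ev / ac    # Cost Performance Index; < 1 means over budget
            eac = bac / cpi  # Estimate at Completion, assuming current efficiency holds
            return spi, cpi, eac

        # Made-up project figures for illustration
        spi, cpi, eac = evm_summary(bac=500_000, pv=200_000, ev=180_000, ac=210_000)
        print(f"SPI={spi:.2f}  CPI={cpi:.2f}  EAC=${eac:,.0f}")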

    Measuring progress entails far more than checking off a line item in the work breakdown structure. Projects need to be viewed as organisms rather than task lists. In the human body, different types of cells perform different tasks, in different locations and at different times. In projects, different people in different roles work individually to complete tasks that in turn trigger other people to begin or complete their tasks. You can’t diagnose and treat a disease in an organism unless you know all the symptoms and how they interact. Similarly, you can’t diagnose and treat performance problems in a project unless you know what symptoms to look for, where to look for them, and what to do about them.

    TWO WAYS TO USE MEASUREMENT

    Some people use software measurement like they use a daily weather forecast. They want to know how to prepare for the day, so an overview is all they need. Knowing the estimated average temperature and whether it will be rainy or sunny, they can make decisions about how to dress for the day and whether they should try to run an errand over lunch. Similarly, from a few department-wide indicators, the Director of Software Development can tell whether her projects will complete successfully or whether she should contact a headhunter on her lunch break.

    For this class of measurement users, the details aren’t particularly important. They know that it will be hotter on the streets downtown than it will be in the shaded streets of the suburbs. The temperature isn’t likely to fluctuate far from the forecast. Of course, there’s always the chance that even a sunny day will suddenly turn nasty if certain conditions develop. If the forecast doesn’t show that as a possibility, they’re content to leave the umbrella behind. Similarly, since the earned value across the projects is tracking close to expectations, our Director of Software Development can go out for lunch and not worry about whether the VP will be waiting in her office when she returns.

    Others use software measurement as some kind of archeological dig. They examine papyri bearing curious line glyphs and tablets with weather-worn bar carvings, from which they draw lessons from the past. For these folks, measurements are useful because they reveal how people and processes and projects really work. Measures tell stories about how teams succeeded or why they failed. What was life like on the streets of Cubicle City when the earthquake struck, the build crumbled, and the Atlantis Project slid beneath the waves of the Sea of Red Ink?

    For this class of measurement users, the details are extremely important. They know that software development projects succeed or fail requirement by requirement, code line by code line, defect report by defect report. Trend lines are all well and good so long as the environmental factors don’t change. If any of them do change, the forecast can become invalid, with a sunny morning quickly turning into a dreadful afternoon. This class of measurement users knows that if you have detailed measurement data that shows how different types of events affect people, products, and schedules, then you can improve the durability and reliability of your forecasts and plans.

    These two perspectives are not incompatible. In fact, both are necessary for a successful measurement program. Unless senior management can derive some operational and strategic benefit from the indicators you put on their Quarterly Business Review Dashboard, they aren’t likely to give your efforts much financial or logistical support. On the other hand, unless you’ve demonstrated that you can manage the chaos of your day-to-day tasks, you won’t be around to see them write the check.

    Nevertheless, that’s no excuse to start counting everything in sight. Improperly conceived, a measurement effort can turn into an expensive exercise in arithmetic that causes more problems than it solves. Development teams wouldn’t think of starting to code without first doing some kind of design work. A measurement effort deserves the same care and preparation.

    WHAT IS SOFTWARE MEASUREMENT?

    Put aside for a moment everything you know about software. Forget what you learned from Pressman and Kan. Ignore what the tools vendors have told you. With your slate clean, answer the following deceptively simple question: What are we measuring when we measure software, and why do we measure it?

    On the face of it, the answer seems straightforward. We are measuring a process—the tasks involved in developing software. We are also measuring a thing—the software product’s functional content and its conformance with specifications and quality requirements. This answer, however, merely identifies the two major domains of inquiry: process and product. It tells us the areas we want to measure, but it doesn’t help us decide what exactly we want to measure, why we want to measure that instead of something else, or what we ought to do with the data once we have it.

    Those two domains of inquiry are huge, and they span a host of interrelated components. So, there won’t be a simple answer to the question. When we investigate software, we are examining design and development processes, validation processes, customer needs and savvy at various times, code, documents, specifications, online help, etc., etc. To make matters more interesting, very few of these components are actually tangible things.

    For example, requirements drift is not a thing in itself—it’s a change, a delta. For convenience’s sake, we like to locate drift in the physical difference between a requirements document at time A and time B. That lexical difference isn’t the shift itself, however. It’s the symptom or the trace of the measurement target, which is the event of drift. And that event is very hard to analyze effectively. It might be that the customer simply changed its collective mind. It might be that the systems engineers neglected to probe customer requirements deeply enough to determine the real requirements. It might be that the requirements never really changed, but were just inaccurately documented or inappropriately interpreted during the development cycle.
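
    A minimal sketch of that convenient proxy, assuming requirements are stored as ID-to-text pairs (the snapshots and IDs below are invented for illustration); note that it counts only the lexical symptoms of drift, not the drift event itself.

        def requirements_drift(version_a, version_b):
            """Count added, removed, and changed requirements between two snapshots.

            Each snapshot maps a requirement ID to its text. A lexical diff is
            only a symptom of drift, not the underlying event.
            """
            added = version_b.keys() - version_a.keys()
            removed = version_a.keys() - version_b.keys()
            changed = {rid for rid in version_a.keys() & version_b.keys()
                       if version_a[rid] != version_b[rid]}
            return len(added), len(removed), len(changed)

        # Hypothetical snapshots taken at time A and time B
        time_a = {"REQ-1": "Export reports as PDF", "REQ-2": "Support 100 users"}
        time_b = {"REQ-1": "Export reports as PDF and CSV", "REQ-3": "Audit logging"}
        print(requirements_drift(time_a, time_b))  # -> (1, 1, 1)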

    Similarly, we often speak of the source code as the end product of software development. Source code isn’t a product in the typical sense of the term, and its transference to a CD isn’t the end result of the process. The source code is a code: like any language, it is the result of experience and thinking and analyzing and communicating. Like any language, it only exists as a language when it is used or executed. The process isn’t complete even when the software is first used by the customer’s employees to successfully accomplish some task. It’s an ongoing process with many exit points and many decision milestones. Between the time the request for proposal arrives and the time the customer signs an end-of-warranty agreement, hundreds of factors are involved in specifying, designing, creating, testing, producing, distributing, using, and evaluating software.

    If software is really a collection of multiple attributes evaluated by many people over a long period of time, just what are we supposed to measure? The simple answer is: We measure what will help us get our work done.

    All measurement has a rationale, a purpose. It has an audience. It is a means to an end. Someone is going to use the data for some purpose. They will draw conclusions from it. They may change project plans or scope or cost estimates based on those conclusions. Those actions will in turn affect other aspects of the project, maybe even the business itself.

    WHAT DO YOU WANT TO ACCOMPLISH WITH MEASUREMENT?

    Since all measurement has a purpose, you should start off by deciding what you want to accomplish through measurement:

    Do you want to show that your project team really is working at over 100 percent capacity?

    Do you want to prove that the product is a quality product, maybe even of higher quality than might be expected given the lack of support you get from senior management and sales?

    Do you want to use the data to help change the workload of some of your staff?

    Do you want to be able (in some loose sense) to predict whether changes will cause more risk, effort, or cost?

    Most likely, you are turning to measurement to help you do one or more of the following:

    • Understand what is affecting your current performance.

    • Baseline your performance.

    • Address deficiencies and get better at what you do.

    • Manage risks associated with changes in schedules, requirements, personnel allocations, etc.

    • Improve your estimation capabilities.

    The one aspect that’s missing from this list, and the one you would do well not to forget, is bringing visibility to your area. Business executives too often treat software development as a black-box component in their organization. They forget to give it the same attention they give more typical segments of operations like manufacturing, packaging, and shipping. Software development organizations are so used to being treated like some third-party order fulfillment service that they forget that their departments need the same level of attention and involvement from senior management that the operations departments enjoy.

    As a result, your measures should help you address both the financial and the political aspects of software development. You’ll need the usual bottom-line measures such as “on time” and “at spec.” But you’ll also want to call attention to the operation of the department within the organization as a whole, showing it as a consumer of the work of some business units upstream in the total workflow, and as a supplier to other business units downstream in that workflow.

    Of course, we don’t want our measures to be just demarcations, lines on some corporate ruler used to see if we measure up to the CFO’s or COO’s expectations. We also want our measures to be feedback loops. When we do measure up, we want our measurements to show us what we did right. When we don’t meet expectations, we want our measurements to show us why and where we fell short and what to do about it next time.

    A COMMITMENT TO THE FUTURE

    There’s an even more significant implication for software project management here. Software measurement is not just a tool—it’s a commitment. Measures aren’t tossed away at the end of the project with all the network diagrams and resource graphs that have cluttered up your desk for several months. The measurement data holds keys to your future success.

    Looked at day to day, every project seems different. The schedule, the deliverables, the customers, the budget, and the resources aren’t quite the same as they were on the last project. There are new challenges, new obstacles to overcome. Yet, in every project there’s the need to know exactly where you are in the project, how much it cost to get you to where you are today, and how likely it is that you will complete the project on time and on budget.

    This is where software measurement becomes a commitment. Put in place the measures you need to be successful, not just in today’s project but in next year’s projects. Don’t assume that software metrics is something a development manager or a Software Engineering Process Group is supposed to do. If there’s no measurement program, implement one. It’s in everyone’s best interest, most of all yours!

    THINK GLOBALLY, ACT LOCALLY

    It takes a long time to move organizations up the maturity ladder or to get them fully functional as ISO 9001 companies. The entire organization has to change the way it works, and organizations move slowly. If you work in a company that is not interested in IEEE 12207 or CMMI®, there are limits to what you will initially be able to accomplish. For example, many high-maturity, program-level measures require coordinating data across departments, and it is unlikely that you will be able to get this data early in your efforts.

    Nonetheless, you should
