2011 Annual Higher Education Data Warehousing Conference Session Descriptions

John Rome – Arizona State University, No Longer a Blackbox: Unraveling the Mystery of LMS Data

Learning Management Systems (LMS) are vital systems at our institutions, but until recently the data inside them has been a mystery. Arizona State University has done a deep dive into this data and will share what they discovered in the process. Questions like “How many courses are offered in Blackboard this term?”, “Is anyone using SCORM modules in Moodle?”, and “Which students haven’t signed on to their LMS course this semester?” were hard to answer. ASU’s Business Intelligence team decided to dig into this data and “Unlock the LMS Blackbox”. The presentation will share their findings and provide ideas on how other institutions can use this information for LMS course inventory, student behavior, content usage, learning outcomes, etc. They will share the tools, techniques, and resources used to gain a deeper understanding of this information.

Lance Tucker, Ravindra Harve – Boston College, How to Bore Your Auditor

In 2010 our eight-year-old Data Warehouse project received its first formal audit. Although a data warehouse is similar to other applications that receive audits, we learned that there are a variety of concerns specific to a data warehouse environment. We have been working with our auditors to develop solutions to the audit items that are both relevant and practical. Our technical team is quite small, the business functions and roles are just starting to mature, and some of the auditing recommendations are quite daunting. A number of initiatives have been defined to address these issues, including data security classification, account management, and formalizing some relationships with the business components of the warehouse. This presentation will address the top 11 things for 2011 that we learned may be helpful for a successful audit, reviewing both technical and process ideas for meeting auditing requirements. Our data warehouse is Oracle-based, with users on both the Cognos and Oracle Discoverer BI tools. The presentation will conclude with a demo of our work-in-progress data warehouse administration application, which we are building to manage several of the items addressed by the audit.

Ken Diefenbach – CQUniversity Australia, Introducing Dashboards to a University – an education in itself

This presentation traces causes and effects, ups and downs, and most importantly the lessons learned when BI moved from a purely operational to a strategic tool at CQUniversity.

Jordan Meyer – Humboldt State University, Open Source Web Toolkits for Advanced Information Visualization

Academic researchers in information visualization are developing toolkits to handle the onslaught of complex data sets on the web. This is happening largely within the emerging field of data journalism, where visualizations must be both informative and attractive in order to draw in readers. Though many of these tools can be of significant benefit to the DW/BI community, they remain largely unnoticed. This presentation will introduce three of the most powerful toolkits available for advanced visualization: Protovis, Processing, and Prefuse. Examples from each, along with integration techniques for existing BI suites, will be demonstrated, as well as a comparison of their capabilities and limitations.

Phyllis Wykoff – Miami University, Getting Started with BI at Miami University

Like most universities, Miami University has a great deal of data but struggled to provide the information executives require for making strategic decisions. A Business Intelligence (BI) solution was the apparent answer to this issue. To gather support and funding for a full BI solution, IT Services partnered with a key client office to develop a solution to a pressing business issue. The solution required that data from the Bursar, Registrar, Human Resources, Payroll, and Finance be combined to provide a comprehensive view of course profitability for summer courses. Over five months the team defined, developed, and deployed star schemas and data cubes to provide the needed data for decision making. This project was completed using the tools already in use at Miami and with a minimal budget for consulting. This successful project allowed Miami to see firsthand the benefits and potential of a full-scale BI deployment. Funding has now been secured to purchase a BI tool suite and the selection process is underway. This session will focus on the initial project and lessons learned during (and after) the project.

Luna Rajbhandari – Northwestern University, Establishing a brand new BI initiative

Northwestern University began a brand new BI initiative in 2007. Since then, BI has been adopted across various areas of the University, enabling reporting, analytics, and strategic decision making. I would like to share my experience in establishing this BI program: 1) Background; 2) Developing the Infrastructure, Architecture, and Deployment Model; 3) Forming a brand new BI team; 4) Engaging the University Community; 5) Governance, Project Selection, and Prioritization; 6) Solutions: Past/Present/Future; 7) Successes and Challenges; 8) Lessons Learned; 9) Future Plans.

Ted Bross (convener) – Princeton University, Kathleen Dettman (University of California, Office of the President), Tim Moore (Virginia Tech), Jeff Glatstein (UMass, President’s Office), Scott Thorne (MIT), BI Tool Selection Success Stories

Over the years, one of the hot topics has been BI vendor selection. This panel will include 3-4 people who have actually gone through the process successfully and would like to share their experiences with the group at the conference. The panel will cover a mix of vendor products, including, but not limited to, Cognos, Oracle OBIEE, and Business Objects. The topic could be viewed from either the business or technical end, or from both perspectives.


Steve Grantham – Boise State University, Capturing Complex Reporting Logic in the ETL: Some Examples

In order for a data warehouse to fulfill its promise as a single source of truth, it must provide for complex logic to be moved out of individual reports and captured accurately and definitively within the data warehouse itself. The desired logic must be well defined, agreed upon, and incorporated into the ETL, and in some cases new dimensions or measures must be created. In this presentation I will describe several specific examples in which I incorporated such logic into Boise State’s iStrategy data warehouse, with the hope that seeing these examples may be helpful to those who are contemplating similar work.


Jeffrey Stark – Rensselaer Polytechnic Institute, Advancing your Development Office One Dimensional Model at a Time

As with most institutions, Rensselaer’s Advancement Office requires information on a daily basis to support key development initiatives. Requests for data vary from ad hoc detailed reporting of a donor’s activity to prospect management and research, stewardship reporting, and ongoing progress of development initiatives. In this presentation we’ll take a look at Rensselaer’s dimensional modeling approach to addressing the Business Intelligence needs of our Advancement Office.

Keith Cushing – Rensselaer Polytechnic Institute, Institute Advancement: Front-End Perspectives on a Business Intelligence Solution

In this presentation, you will learn how the data warehouse team at RPI handled various front-end aspects of a data mart implementation with its Institute Advancement office. Attendees will see how the team approached requirements gathering, testing, prototyping, rollout, and training. Whether you’re just in the planning stages or currently working on a BI solution for your Advancement office, you’ll gain insight from this presentation on RPI’s front-end Advancement data mart experience.


Eric Larsen – Seattle University, Metadata-driven load processing at Seattle University

This presentation gives a technical overview of the design and processes around the metadata-driven load processor used by Seattle University’s dimensional data warehouse. Metadata-driven processes accelerate integrating new data as well as implementing new dimensions and fact tables, and also decrease the amount of development required to implement new data warehouse objects. As part of load processing, in-depth data validation is also metadata-driven and allows for targeted reporting of data errors to the appropriate offices on campus. Currently, the load processor manages both the staging and reporting schemas, which encompass 280 staging tables, 110 dimensions, 130 fact tables, and 87 custom script tasks. Four major systems are integrated, with the ERP being the primary source. The data warehouse has been fully live since May of 2008.

Grzegorz Grabowski and Jamie Balducci – Seattle University, Designing and Maintaining Automated Census Reports

Census reporting is an important function of many IR offices since it provides the official statistics for institutions. By definition, census reports not only need to show an information snapshot at a given point in time, but they also have to be accurate, available, perform well, and be reproducible at any time. In my presentation, I will describe how IR at Seattle U was able to move from manually generated census reports to a more streamlined and automated reporting solution. Key elements of the transition include: 1) creation of a data warehouse with point-in-time functionality; 2) IR and OIT team collaboration; 3) development of an override table to correct unresolved data errors for census reports, if needed; 4) use of the flexible report design in Microsoft Reporting Services to integrate static census reporting with “current” reporting; 5) use of a custom table for managing census dates with the metadata required for reporting; and 6) successful integration of custom tables within the data warehouse schema.
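As a rough illustration of the custom census-date table described above (a sketch only; the table, column, and term names are hypothetical and not Seattle University's actual schema), a small metadata table can drive the effective-dated filter for point-in-time snapshot queries:

```python
from datetime import date

# Hypothetical census-date metadata: each (year, term) maps to its
# official census date. Names are illustrative only.
CENSUS_DATES = {
    ("2011", "FQ"): {"census_date": date(2010, 10, 8)},
    ("2011", "WQ"): {"census_date": date(2011, 1, 14)},
}

def census_filter(year, term):
    """Build the effective-dated predicate for a census snapshot query."""
    d = CENSUS_DATES[(year, term)]["census_date"].isoformat()
    return f"effective_from <= DATE '{d}' AND effective_to > DATE '{d}'"
```

Because the date lives in one metadata row rather than in each report, the same report definition can serve both static census reporting and "current" reporting by swapping the predicate.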

Jamie Balducci, Seattle University, Getting information to the people who will use it

There are two major hurdles for data warehouse interface design: end users need to know how to use the tools, and they need to understand the information provided. This session will describe how we at SU have tried to strike the right balance on these hurdles as our report designs have evolved over time. Some concepts include: building a foundation of common data source views for areas of knowledge – one source of truth; report metadata, labeling, etc. that provide “road signs” for rapid adoption; developing simple tools where users don’t need to understand queries to get the data; developing a mix of aggregate reports and roster reports to meet a variety of needs; training to “teach them to fish” by leveraging peer support, Excel capabilities, and flexibility to promote self-service; and, as a next step, reports for advanced users who could leverage tools like Report Builder to self-serve.

Bob Duniway – Seattle University, From Novice to BI Leader: Keys to Developing and Implementing a Successful BI Strategy

How do you design and implement a successful enterprise data warehouse or other campus-wide BI initiative when you’ve never done it before? What does it take to overcome the doubt, confusion, and even active resistance that are too often the initial reactions when proposing to take a major step forward in turning data into accessible and useful information? Equally critical, how do you avoid unrealistic expectations that lead to disappointment and loss of support for ongoing BI efforts? This overview session on developing and pursuing a BI strategy will address critical planning work, skill assessment and development, team building, project management strategies, system governance, user support, budget and human resource requests, and alignment of the BI strategy with institutional priorities. If you are new to BI, you will leave this session with a general framework for successfully pursuing a long-term strategy at your institution. If you have BI experience, I hope you will still gain insights about how to enhance the value of your BI efforts through broader integration with planning, operations, and assessment at your institution.

Denice Inciong, Tamara King (Systems Manager), and Tasha Trankiem (Programmer Analyst) – South Orange County Community College District (SOCCCD), Using SharePoint to manage our Community College Data Warehouse’s Metadata Process

We started our data warehouse with a very loose set of standards on metadata, and as we developed reports things quickly got out of control. Over the past year we have created a report catalog, a set of reporting standards, and a system to document data element information as well as calculations in reports. We have three main pieces in our process: • The Report Catalog – inventories all of the reports in inFORM, as well as our IT transactional system. • The inFORM Glossary – holds the metadata for our database data sets. • The Reporting Glossary – holds the metadata that is embedded in our reports. We had looked at outside software, but with our internal team of developers we created our own system using SharePoint lists and Reporting Services to manage our metadata. The goal is to have this metadata easily accessible so that we eliminate redundancies in requests for reports, establish a standard look and feel for navigating reports, and help users find the reports and information they need. It was a difficult and sometimes painful alignment, but we have come to feel this effort has created a successful process appreciated by management, developers, and especially our users.

Tony Lipold – South Orange County Community College District, with Denice Inciong (Director of Research and Planning) and Nicole Ortega (Research Analyst) – Saddleback College, Athletes Scoreboard: A case study on cohort tracking in a Community College Data Warehouse

Tracking and reporting on specific cohorts of students was a challenge in our District’s inFORM Data Warehouse. Our Athletic Director wanted to monitor and evaluate his intercollegiate athletic program, but no tracking mechanism to identify athletes existed. The Athletic Department manually kept detailed spreadsheets on the athletes’ eligibility and progress, and they were not able to easily conduct any longitudinal analysis on their student athletes. Using the inFORM Data Warehouse we created a process that tracked the students. Data cubes were built to measure student retention, GPA, success and retention rates, and transfers. Our athlete dashboard (called Scoreboard) visually displays this information quickly to the AD and his coaches. The Athletic Director uses this information to monitor athletes’ eligibility requirements, help his coaches monitor players’ academic performance, and measure his athletes’ transfer rate. It has been very successful in illustrating to the college community that athletes are productive and successful students who contribute to the overall college success. Additionally, this information is used to advocate for athletic programs in a time of budget cuts across the state of California. inFORM is built on a Microsoft platform: data is stored in SQL databases, reports are built with Reporting Services, and everything is delivered through a SharePoint portal page.

Helen Ernst – SUNY System Administration, Designing the Cost Calculator Data Warehouse

The Higher Education Opportunity Act of 2008 (HEOA) requires all higher education institutions to post a Net Price Calculator on their websites by November 2011. In this session we will briefly review the requirements based on the HEOA, then delve into the Data Warehousing and Business Intelligence projects underway at the State University of New York System Administration to provide an enhanced and flexible calculator for its 64 campuses. Ample time will be left at the end of the session for an open discussion on how other schools are providing this functionality.

Susan Dastour and Michael Wolf (Manager, Data Warehousing) – The George Washington University, Planning to Fail – A case study in project management in a data warehouse environment

Project management is a unique and challenging profession that rarely requires any data warehousing expertise. However, in the process of implementing, maintaining, or expanding a data warehouse, analysts and developers often find themselves called upon to manage technically demanding and complex projects. The GW Data Warehouse team recently completed a project that tested a full production failover from one campus to another. Using this “GW Data Warehouse Disaster Recovery Test” as an example, this presentation will highlight the lessons we learned while managing this project. Themes such as IT governance, cross-departmental collaboration, understanding organizational priorities, and common project difficulties will be discussed. We hope that other data warehouse professionals will be able to capitalize on our successes and learn from our mistakes as they manage their own projects in the future.

Kimberly Griffin and Dan Barrett (Sr. DW Developer) – The University of Chicago, Implementing a DW in parallel with a new application: How not to go nuts!

Building a data warehouse for a source system already in production is challenging enough; the job becomes even trickier when the source hasn’t even been configured, let alone built! We’d like to share some lessons learned from both the BI and DW perspectives during the planning, design, and build of a data warehouse in conjunction with the implementation of a new application system (in this case the Click Commerce Grants and IRB modules). The presentation will include:

Planning: • Fight for resources! • Coordinating the project plan and milestones (and explaining why reporting needs to lag behind)

Design: • Strategies for piggybacking application and reporting requirements gathering • The art and science of reading functional specs from a data warehouse point of view • Incorporating reporting needs into the application design • Coping with an ever-changing transaction data model and the inability to conduct data profiling, and the impacts on source-to-target mapping

Build: • Building a BI layer on top of an empty database • The extract – easier to manage when the DW team writes it • Maximizing transform and load coding before extracts are available

Test: • Planning ahead with the application team to coordinate test data that meets reporting needs • Ensuring that test data is delivered in sufficient time for reporting to test thoroughly

Manage: • Tips for integrating a reporting team into a bigger project team

Fred Friedrich – The University of Texas at Austin, Governance for BI – UT Austin’s Success Story

This presentation covers how the University of Texas at Austin has established governance in support of its Information Quest (IQ) business intelligence initiative. The presentation will begin by listening to and noting key questions from the audience about what they would like to get out of the session, then address these issues as the presentation covers UT’s model and how the structure works in concert with other key success factors in achieving a vital and productive BI enterprise for the university. Presentation objectives: • Define what governance is, e.g., in view of other oversight responsibilities and the stakeholders involved. • Identify key factors to contemplate and address in setting up your own governance model, given your campus and project vision for BI. • Overview the governance model used at UT Austin for IQ, how it works, and why. • Overview other success dependencies outside of the governance structure. • Share a success story and an unsuccessful one: why good governance helped the first and limited the success of the second.

Karen Weisbrodt, The University of Texas at Austin, Ross Hartshorn, Senior Software Developer/Analyst, and Vince Gonzalez, Business Analyst, Comparing Ourselves with Others: Lessons Learned in Using Data from Peer Institutions

The need to compare data from your institution to that of state and national peers presents its own unique set of issues and considerations. At the University of Texas at Austin, the past year included three significant efforts centered around comparative data: the NRC (National Research Council) assessment of research doctoral programs, the creation of an internal reporting tool for administrators that includes benchmark data from peer institutions, and a large number of requests from state legislators for comparison data in areas such as student demographics, financial metrics, degrees, and research. Staff from UT Austin will share their experiences from these projects as well as lessons learned from both a technical and methodological standpoint.

Jenna Allen – University of California Office of the President, Developing Data Warehouse Business Requirements for Higher Education: Key Lessons and Templates to Get You Started

The University of California, Office of the President is building an enterprise data warehouse called the Decision Support System (DSS). In a partnership between business and IT, the Institutional Research Unit and the Information Resources and Communications Office have been working together for two years to design and implement this solution. Business requirements gathering has been at the center of our efforts to ensure user acceptance of the DSS and is embedded in many steps along the way – from definition of broad scope through data modeling, testing, and report/query production. Institutional Research (IR) is tasked with gathering the business requirements for the IT team. The complex nature of the higher education enterprise poses unique challenges for this process. Working with consultants and our IT team, IR has developed both a process for engaging our business users and gathering their requirements, as well as a portfolio of deliverables and documentation. In this session, we’d like to share our experience as well as templates from two rounds of business requirements gathering from Phases 1 and 2 of the Decision Support System (Payroll/Personnel and Student and Instruction, respectively).

Michael Wonderlich – University of Illinois, Data Visualization – Using the Right Tool for the Right Job

Today’s decision makers need current information they can assess quickly and accurately. Visualizing the data through graphical representation conveys meaning at a glance, but choosing which tool to use requires understanding both the need and the features of each technology. Many institutions select one tool and try to use it for all data visualization needs. This often results in a poor user experience that leads to low user adoption rates. This presentation will discuss the factors to consider when selecting your first data visualization projects. Recognizing the differences between dashboards, interactive analysis, and graphical reports is crucial to fitting the right tool to your project. Forcing a tool to deliver functionality that is not its primary focus often results in an awkward experience. Many tools have the flexibility to build and deliver multiple types of graphical products; however, if the technique is not aligned with the tool’s strengths, a lengthier development effort may drive your costs too high. Understanding these factors will lead to using the right tool, finding the right project, and building a positive user experience.

Rick Getty and Beth Ladd – University of Illinois, Waterfall Withdrawal: Delivering Datamarts on a Dime with Agile

Everyone is talking about Agile. They say the benefits are that you can deliver faster results, changes are easily incorporated, and teams are self-managing, requiring less management overhead. So why aren’t you doing it? Moving from a traditional project methodology with rigid timelines and clear deliverables to an Agile framework can be scary for the team and a hard sell to project sponsors. The University of Illinois faced our fears and overcame them to successfully deploy a complex datamart. In this presentation we will compare and contrast waterfall and Agile project methodologies by giving a brief history of BI project management at the University of Illinois. We will provide a case study of our pilot project, which used an Agile framework to build a datamart, and describe the challenges we faced and the lessons we’ve learned. Finally, we will describe the hybrid methodology in use today and provide tips and tricks for getting started and for modifying Agile methodology for use in your BI shop.

Shahriar Panahi, Jeff Glatstein (Director of Information Analysis and Delivery), and Sean Blood (Product Manager) – University of Massachusetts President’s Office, Integrating Student-related Predictive Analytics with BI

Use of predictive analytics is common across many facets of higher education administration, but the majority of BI tools are not capable of this analysis and don’t integrate easily with the tools that are. We intend to propose a methodology for integrating this capability into the Enterprise Data Warehouse. We will explore the example of student retention and show how predicting students’ propensity to leave using sophisticated predictive analytics can be used within the BI context to deliver actionable information to users who can intervene. A detailed framework for integrating statistical analysis and BI will be presented.
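To make the retention example concrete, here is a minimal sketch of the scoring step: a simple logistic model produces each student's propensity-to-leave score, staged as rows for loading into a warehouse table where BI reports can surface it. The weights, feature names, and table layout are invented for illustration and are not the presenters' actual framework.

```python
import math

# Made-up coefficients for the sketch; a real model would be fit
# from historical retention data.
WEIGHTS = {"gpa": -1.2, "credits_attempted": -0.05, "holds": 0.8}
BIAS = 1.0

def retention_risk(features):
    """Logistic score: probability-like risk of attrition in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def stage_scores(students):
    """Produce (student_id, score) rows for a hypothetical risk table."""
    return [(sid, round(retention_risk(f), 3)) for sid, f in students.items()]
```

The point of staging scores back into the warehouse, as the abstract suggests, is that advisors can then see risk alongside the rest of the student record in their existing BI tools rather than in a separate statistics package.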

Bodo Rieger, Ellen Hoeckmann, and Sonja Schulze – University of Osnabrueck, Germany, Data Warehouse-based Decision Support for Higher Education Management

This presentation illustrates the concept implemented at the University of Osnabrueck, which uses flexible analytical reporting based on a data warehouse to effectively support users’ strategic as well as operational decisions. Increasing national and international competition in higher education requires efforts from all university members to improve personal and organizational performance. We will present decision support applications for all kinds of members, ranging from executives to faculty, and including students as well. Originally started as a pure reporting project in 1998, R&D activities were soon redirected towards decision support for university core processes due to user requirements and acceptance issues. Functions include benchmarks for students to monitor personal performance, exam scheduling using heuristic optimization for faculty members, balanced-scorecard-based performance monitoring, and redesign of study and research programs for campus executives. Currently, users include about 10,000 students and 1,200 faculty members in 175 programs of study. The presentation focuses on the critical success factors, e.g., early and continuous involvement of users, serving students and open-minded faculty first, and ongoing adaptation to changing requirements, recently including mobile apps.

Bill Yock – University of Washington, Life, Liberty and the Pursuit of Data! A constitutional democracy approach to data governance at the UW

The University of Washington has created a “constitutional democracy” approach to governance of data management practices. The UW Institutional Data Management Standards document serves as our base “constitution”, setting forth fundamental principles, definitions, and responsibilities. It establishes three primary governance bodies: Data Trustees (Executive branch), Data Custodians (Legislative branch), and the Data Management Committee (Judiciary branch). The rules of law are established by fundamental principle statements (Bill of Rights) as well as guideline documents (Amendments). Legislative districts were established by drawing up boundaries using a “Data Map” that indicates system and business domains. This constitutional democracy approach has stood the test of time and proven itself to be a sustainable and repeatable process serving the citizens of the republic well. This presentation is an excerpt from an article published in the “Information Management Best Practices – Volume 1” book from The Information Management Foundation. www.timaf.org

Lizabeth (Betsy) Wilson and University of Washington Data Management Committee members, Data Management at UW: Hub and Spokes (Panel)

The Data Management Committee (DMC) is the policy and standards body for information management at the University of Washington. On the DMC, appointed data custodians and trustees for the University’s main data domains author foundational policies such as the UW Institutional Data Management Standard, define and enforce data access controls for enterprise systems (including the Enterprise Data Warehouse), and publish binding guidelines on data management practices and decisions. This panel introduces the breadth of functions at this highly decentralized institution through the eyes of representative committee members. Up close and personal, DMC members will demonstrate DMC policies at work for them: the Dean of Libraries and Committee Chair will describe the roles and responsibilities of data management at UW and leadership of bi-weekly committee meetings; the Office of Planning and Budgeting will attest to the importance of metadata management and institutional definitions in creating enterprise reports; the Financial Management Office will bear witness to the impact of expanded access to financial data across the entire University; the Associate Vice President for Human Resources will illustrate how guidelines on the use of home address information safeguard privacy while providing necessary data for emergency procedures; the Registrar will attest to the relevance of the Social Security Number standard in student source systems; and the Chief Information Security Office and Office of Public Records will speak to the alignment with other policies, including security and record retention. Live in Seattle, this is an opportunity to hear first-hand from this nationally recognized committee. The panel provides an excellent follow-on to the talk “Life, Liberty and the Pursuit of Data! A constitutional democracy approach to data governance at the UW” (Bill Yock), with plenty of opportunity for questions.
Christina Klawitter and Kathy Luker (Consultant, Office of Quality Improvement) – University of Wisconsin-Madison, Leveraging Information Resources to Improve Undergraduate Retention and Graduation Rates

The University of Wisconsin System has set a goal to graduate 30% more students per year by 2025. In support of this goal, a team of end users, data experts, and query writers at the University of Wisconsin-Madison campus has collaboratively developed queries and analytic tools which support decision-making and retention-focused advising interventions. Previously, certain student populations were difficult to identify, including returning students, students without majors, and students nearing graduation; and academic performance trends among the most at-risk student populations were not easily discernible. The presenters will discuss the value of utilizing a cross-functional team to develop data resources aimed at improving retention-related business practices such as identification of various at-risk cohorts, student communications, intrusive advising strategies, and the development of college policies.

Billie Watts – Western Washington University, Course Enrollment Management Using Traditional Reporting and Business Intelligence Techniques

A simple flat-style report using course history detail and projected enrollment counts reduced the number of courses offered by 10%-15%. Course data on the report included the number of sections and enrollment over three academic years, plus current enrollment and waitlisting. Based on these enrollment numbers and next year’s admissions projections, the number of courses and sections needed was calculated. After eliminating 10-15% of fall quarter courses offered in the first year of use, we had the best course access in ten years. New waitlisting numbers added to the report have allowed the creation of multiple new sections of courses that students want to take, rather than just courses faculty want to teach. Additional business intelligence web reporting was accomplished using MS Excel pivot tables, Microsoft Analysis Services cubes, and ASP.NET graphical front ends. Our four web solutions provide interactive views of courses offered, student credit hours, declared majors, and awarded degrees. This reporting is used by our Provost, Deans, and Department Chairs to see trends in enrollment, programs, courses, and degrees. These solutions have provided information in difficult budget times that enabled executive management and departments to make Western Washington University leaner and more efficient.
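The section-planning arithmetic described above can be sketched as follows. This is an illustration only; the function, its parameters, and the growth-factor approach are hypothetical, not WWU's actual report logic.

```python
import math

def sections_needed(avg_enrollment, projected_growth, section_cap):
    """Project next year's demand from historical enrollment and an
    admissions growth projection, then round up to whole sections
    (always offering at least one)."""
    projected = avg_enrollment * projected_growth
    return max(1, math.ceil(projected / section_cap))
```

For example, a course averaging 95 seats of demand, with 10% projected growth and 35-seat sections, would need `sections_needed(95, 1.10, 35)` sections; comparing that figure against sections actually scheduled is what surfaces the 10-15% of courses that can be cut or consolidated.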