2013 Session Descriptions

Libby Barlow; University of Houston

It’s About Partnerships: How Evolving Partnerships Have Built a Warehouse

When the need for a data warehouse preceded both the understanding of what a warehouse is and the funding to develop it, the University of Houston got the project off the ground with a single internal sponsor outside of the Information Technology staff. This presentation traces the development of a warehouse environment, highlighting the shifting partnerships that have built it and will continue to support it. Partners include the initial sponsor, business owners, data experts, technical staff, IT staff, and the budget office. We will delve into the governance concept originally created to marginalize IT’s role and subsequently to fence in warehouse resources within IT. We will also outline the “pay to play” resource model that has grown the warehouse to multiple databases with multiple business owners, and we will discuss the use of license management and an alternate presentation tool to extend accessibility to the needs of multiple constituencies. Finally, we will speak to how business owners work outside of their data silos on common data elements and to create linkages across tables and cubes in the warehouse.



Neil Belcher; Cornell University              

Dimensional Modeling: A Bottom-Up Approach              

The Challenge: How do you design and implement a Dimensional Datamart in parallel with the ERP’s implementation? During any ERP implementation the majority of focus is on the application implementation and the BI solution is often an afterthought. How can you design a datamart without clear requirements from the functional leads? Can you get reliable answers to your functional questions when the users are learning the new system themselves? Do you have any sample data specific to your implementation to design from? Even without clear requirements, my experience has shown that the application data can lead you to a foundational datamart design based upon irrefutable facts and obvious dimensions in the application. I call this the “Bottom-Up” approach to data modeling. Traditionally, a “Top-Down” approach is used where clear requirements are first gathered and then synthesized into a conceptual, logical and physical design. The “Bottom-Up” approach uses the reverse order by first examining the physical source and using that to derive the physical/logical designs. Later in the project, a parallel “Top-Down” approach is conducted to refine what is delivered from the “Bottom-Up” approach.
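The “Bottom-Up” idea of deriving candidate facts and dimensions directly from the source application’s data dictionary could be sketched as follows. This is purely an illustration of the concept, not code from the presentation; all table and column names are hypothetical.

```python
def classify_columns(columns):
    """Split source-table columns into candidate measures, dimension keys,
    and descriptive attributes, as a first "Bottom-Up" pass before any
    functional requirements exist.

    `columns` is a list of (name, data_type) pairs as they might be read
    from an ERP's data dictionary.
    """
    measures, dimension_keys, attributes = [], [], []
    for name, data_type in columns:
        if name.endswith("_id") or name.endswith("_code"):
            dimension_keys.append(name)   # obvious foreign keys -> dimensions
        elif data_type in ("numeric", "decimal", "integer"):
            measures.append(name)         # additive numbers -> candidate facts
        else:
            attributes.append(name)       # text/dates -> descriptive attributes
    return measures, dimension_keys, attributes

# Example: a hypothetical enrollment transaction table from the new ERP.
enrollment_columns = [
    ("student_id", "varchar"),
    ("term_code", "varchar"),
    ("course_id", "varchar"),
    ("credit_hours", "numeric"),
    ("grade", "varchar"),
]
measures, dims, attrs = classify_columns(enrollment_columns)
print("candidate measures:", measures)    # ['credit_hours']
print("candidate dimensions:", dims)      # ['student_id', 'term_code', 'course_id']
```

The later “Top-Down” pass would then refine this mechanical first cut against actual user requirements.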



Kim Berlin; Stony Brook University        

Tuition Revenue Model Integration for Academic Instruction, Enrollment and Teaching              

Learn how Stony Brook University applies tuition revenue throughout its student records data models. We initiated this effort to determine how much tuition an academic department generates through course enrollment and through enrollment in majors and graduate programs. We can tell, for example, how much tuition revenue is generated by faculty member, by major and graduate program, and by class subject, all rolling up to academic departments and financial accounts. I will step through how tuition is captured and distributed, the challenges and decisions involved, and the applications of these models to departmental profiles, faculty workload, RCM budgeting models, and more.



Elsa Cardoso; ISCTE-University Institute of Lisbon, Portugal; co-presenter Louise Nelson, University of Texas at Austin

An Agile Business Intelligence Manifesto for Higher Education               

By following agile methodologies, business intelligence project owners can deliver useful results more quickly to stakeholders. Elsa Cardoso of the ISCTE-University Institute of Lisbon in Portugal and Louise Nelson of the University of Texas at Austin will provide examples of ways that higher education institutions at different levels of BI maturity can apply agile techniques when defining, scoping, building, and delivering business intelligence solutions. Whether your institution is at an early stage of BI implementation or looking to expand on an existing program, an agile approach can provide your customers with the results they want on the timeline they need. Objectives of this session: (1) Identify the key success factors for achieving high-quality BI solutions; (2) Show how the agile manifesto can be adapted for use in the BI development process in HE; and (3) Show how to start from where you are: assess the BI maturity level of your HE institution and build a plan from there.



Darren Catalano; University of Maryland University College     

Advanced Analytics @ UMUC   


UMUC is rolling out a series of ‘Next Generation’ dashboards to key stakeholders throughout the University. These dashboards are visual, interactive, and mobile to promote user engagement and exploration. Behind the scenes is a complex data model that combines disparate data from our Student Information System (SIS), Customer Relationship Management (CRM), and Learning Management System (LMS), among other University systems, allowing UMUC to build cross-functional subject areas. In addition, UMUC has kicked off a Predictive Analytics initiative focused on improving student outcomes. In this session, we will discuss the process and principles that went into building these tools, as well as provide a demo of the UMUC dashboards and Student Success Application.


Marco Cestaro; University System of Georgia   

Reviving Data Governance Within the University System of Georgia     

Come learn how the University System of Georgia (USG) has been working to establish a formal data governance structure that balances the needs of a system office with the realities of working with multiple, diverse campuses. The presentation will kick off with a review of the business drivers and of how changes in administration and the development of more stringent data requirements have shaped the process. This will be followed by what the governance structure looks like, how the various groups are intended to function, and who the players are at the various tables. Lessons learned, next steps, and a chance for the group to discuss their own data governance experiences will round out the presentation. Every institution, big or small, public or private, in a system or not, needs data governance. USG has learned just as much from programs that have succeeded at small schools as it has from those at major corporations. This presentation is for anyone involved with data governance or with how information is used on campus.



Hank Childers; University of Arizona     

Business Intelligence at the University of Arizona – a Case Study             

The University of Arizona has made a significant investment in business intelligence over a four-year period. This presentation will cover objectives, budget, technology, team organization, delivered capabilities, working relationship with campus, approaches taken, issues encountered, quantitative and qualitative results, current initiatives, and ongoing challenges. The broad scope and high level of effort and investment over a relatively short period of time provide an especially good opportunity for other institutions to profit from the experience of the UA.



Charles Drucker and Charles Masten; University of California, Office of the President  

Life after graduation:  Triangulating disparate data sources to describe students’ employment and graduate school outcomes           

The question of how college graduates fare in the labor market has become increasingly important as policymakers, researchers and accrediting agencies scrutinize the value of a college degree.  Recently, several states have initiated projects to assess the employment outcomes of students at their public universities using state wage data.  The University of California is currently analyzing some 10 million state wage data records for roughly 500,000 individuals who received any type of degree since 1999, reflecting wages earned in California between 2000 and 2011. The data management challenge confronting this project involved merging individual wage records with data from UC’s student information system, as well as data from the National Student Clearinghouse to identify recent graduates pursuing advanced degrees.  In this presentation, we will discuss the process of acquiring, preparing, and analyzing these datasets and share systemwide findings on post-graduate earnings and employment for these cohorts in California. In addition, we will discuss how this data can be an asset to campuses in their student research and academic planning activities, as well as a means of evaluating results stemming from traditional methods of assessing post-graduate earnings outcomes, such as alumni surveys.
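The core data-management step described above — joining wage records to degree data and flagging graduates found in Clearinghouse enrollment — could be sketched as follows. This is an illustration only, not UC’s actual process; all identifiers, field names, and sample values are hypothetical.

```python
# Three hypothetical sources keyed on a shared person identifier.
sis = {
    "p1": {"degree": "BA Economics", "grad_year": 2009},
    "p2": {"degree": "BS Biology", "grad_year": 2010},
}
wages = [
    {"person": "p1", "year": 2011, "wages": 52000},
    {"person": "p2", "year": 2011, "wages": 18000},
]
clearinghouse = {"p2"}  # graduates later found enrolled in advanced-degree programs

def merge_outcomes(sis, wages, clearinghouse):
    """Attach post-graduation wage records and a graduate-school flag
    to each graduate in the degree cohort."""
    outcomes = {}
    for w in wages:
        pid = w["person"]
        if pid not in sis:
            continue  # wage record for someone outside the degree cohort
        rec = outcomes.setdefault(
            pid,
            dict(sis[pid], wage_records=[], in_grad_school=pid in clearinghouse),
        )
        rec["wage_records"].append(w)
    return outcomes

result = merge_outcomes(sis, wages, clearinghouse)
# p2's low wages are explained by concurrent graduate enrollment, not unemployment.
print(result["p2"]["in_grad_school"])  # True
```

The Clearinghouse flag matters for interpretation: without it, graduates in school would be indistinguishable from graduates struggling in the labor market.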



Christina Drum and Mike Ellison; University of Nevada, Las Vegas          

Growing Our Own: An Agile Approach to Metadata       

This presentation will describe the design concepts and technical approach underlying UNLV’s development of a home-grown, extensible, enterprise Metadata Repository.  Several factors influenced our approach.  Ultimately, we wanted to manage multiple subject areas of metadata that could each stand alone, potentially with its own set of applications.  Concomitantly, we wanted the ability to establish associations and track a complete lineage of relationships across the various subject areas.  Resource constraints required us to build this out over time (years), so we needed a design that readily allows for expansion.  We also knew that some subject areas would be updated via automated processes, while some must be manually tracked; we wanted the ability to develop and integrate both.  These considerations led us to develop a central metadata repository, maintained in a SQL server database, and designed around a type/subtype paradigm borrowed from object-oriented programming.  The repository is populated with several integrated subject areas, including: Data Definitions (informational elements having strategic value to the institution); Relational Database Metadata (RDBMS systems, tables, and columns); ETL Metadata (data warehousing jobs and sequences); Reporting Elements (as presented in BI interfaces and applications); and Project Management Metadata (used in data mart development for the distributed collection of requirements and business metadata).  Additional areas of interest to be developed include Business Process Metadata, the tracking of Data Feeds, and Data Stewardship responsibilities.  UNLV’s Metadata Repository has become a central component of our (young) institutional data warehouse and BI initiative.  We hope that sharing our approach will be of value to others who are seeking comparable solutions to “the metadata problem” in their organizations.
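The type/subtype paradigm described above — a common supertype record for every metadata object, with lineage tracked as relationships between supertype identifiers — could be sketched in miniature as follows. This is an illustrative model only, not UNLV’s actual schema; all object types and names are hypothetical.

```python
import itertools

class MetadataRepository:
    """Toy type/subtype repository: every object gets a row in a shared
    supertype store, type-specific attributes live in a subtype store,
    and lineage is edges between supertype ids across subject areas."""

    _ids = itertools.count(1)

    def __init__(self):
        self.objects = {}        # supertype: id -> (object_type, name)
        self.details = {}        # subtype: id -> type-specific attributes
        self.relationships = []  # lineage: (from_id, to_id, relationship)

    def add(self, object_type, name, **attrs):
        oid = next(self._ids)
        self.objects[oid] = (object_type, name)
        self.details[oid] = attrs
        return oid

    def relate(self, from_id, to_id, relationship):
        self.relationships.append((from_id, to_id, relationship))

    def lineage(self, oid):
        """Follow relationship edges downstream from one object."""
        return [(self.objects[a], rel, self.objects[b])
                for a, b, rel in self.relationships if a == oid]

repo = MetadataRepository()
col = repo.add("column", "STUDENT.GPA", data_type="numeric")
job = repo.add("etl_job", "load_student_mart")
rpt = repo.add("report_element", "Average GPA by College")
repo.relate(col, job, "feeds")
repo.relate(job, rpt, "populates")
print(repo.lineage(col))  # [(('column', 'STUDENT.GPA'), 'feeds', ('etl_job', 'load_student_mart'))]
```

Because relationships reference only the supertype, new subject areas (data feeds, stewardship, business processes) can be added later without changing how lineage is stored or queried — which is the extensibility property the abstract emphasizes.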



Eric Elkins and Alice Few; University of Washington      

The Institutional Research Office, a Catalyst for Change in Data Warehouse Design       

This presentation illustrates the impact of Institutional Research on data warehouse design and development. As decision-making entities grow more dependent on accurate and timely data, the Institutional Researcher finds herself analyzing and reporting in more detail and with greater frequency, taxing the architectural limits of a traditional data store. Consequently, IR’s once-limited audience has expanded, requiring a more sophisticated, enterprise-level solution. Our goal is to demonstrate how IR has evolved from getting what it can from disparate, heterogeneous data and formal data stores to becoming a partner in development and design. This partnership illuminates the importance of metadata, master data, data consistency and availability, and data granularity, and shows how front-end tools influence the back-end structures.



Chris Frederick; University of Notre Dame         

If you’re not doing “In Memory” BI, you should be.       

Notre Dame is using “in memory” BI tools to wow customers in a fraction of the time.  This session is a live demonstration on how to build an analytic solution using Microsoft’s free PowerPivot “In Memory” engine.



Jessica Greene, Ravindra Harve, Dan Riehs; Boston College        

Leveraging Business Intelligence to Develop a University Key Indicators Dashboard      

Current trends in higher education management emphasize the need for careful review of core metrics to assist University leaders with strategic planning and accountability efforts. This presentation will describe how Institutional Research and Information Technology have worked collaboratively to implement a Key Indicators Dashboard delivered as a Cognos Active Report. Two main topics will be addressed during the session: (1) presenters will outline how metrics and comparison institutions, the primary components of the dashboard, were generated, and the challenges associated with the demand for value-added analyses within the dashboard; (2) presenters will also discuss the ETL methods used to integrate a variety of data sources (IPEDS, survey responses, data warehouse data) into both point-in-time and trend views, and the trials surrounding the report’s presentation as an iPad-ready Cognos Active Report. The final version of the Key Indicators Report will also be demonstrated, enabling end users to assess how business intelligence tools can be leveraged to meet business needs.



Todd Hill; University of Notre Dame     

Framework to Start (or Restart) a BI Program     

The purpose of this presentation is to provide a framework for those organizations looking to start or re-start a BI program.  It is based upon my personal experience at Notre Dame where I was asked to run a BI program that had run aground.  Here is some business context:

  • In our first attempt to build an enterprise-wide data warehouse, we were 18 months in and still had not successfully delivered it
  • Customers were extremely frustrated with what we were able to deliver
  • There was low morale with our IT staff because even though they were working hard, we were not able to deliver results
  • There was internal conflict on who should be running the BI program and role and responsibility confusion among the team
  • There was indecision on which BI tools we should be using

In my assessment, we had to do two things: simplify our approach and deliver results quickly. The framework presented was developed as a result. It is not meant to be prescriptive in any way, as I believe there are hundreds of different ways a BI program can be run successfully. This presentation will present the why and what behind key BI program activities, but leave the how up to individual organizations.



Tim Huckabay; Northwestern University             

Something New Under the Stars: Agile Data Modeling for BI

Must data modeling always be a bottleneck in our agile BI projects? Starting development without a complete data model seems risky, but are there ways to mitigate the risk and deliver good stuff to our users sooner? I will describe experiences with star schema and Big Design Up Front and their actual effects on the timeliness and quality of the results. After an analysis of some of the main issues encountered, I will describe some alternative approaches to design and data modeling and how I’ve borrowed ideas from them to get development started quickly and mitigate the risk of changes. I will include examples from Higher Ed subject areas such as Alumni Development and research support.



Jane Kadish; University at Albany, State University of New York

Bringing a Business Intelligence Delivery Strategy to Your Campus – the Culture Change from Project to Program           

The University at Albany is implementing a Business Intelligence Program in partnership with Institutional Research and Information Technology.  Our mission is to ensure that stakeholders, in administration, colleges, and departments have the information they require for operational and strategic decision making.  Our goal is that this endeavor not be a one-time project, but a program, incorporated into the decision making culture of the University, providing accurate and reliable data that decision makers can use with confidence.

Points we will cover include:

1. Establishing direction

2. Getting buy-in and approval

3. Proof of concept

4. Project strategy

5. Determining the IT requirements

6. Funding and manpower (resources)

7. Selecting tools within our fiscal constraints

   a. Training

   b. Consulting

8. Putting a team together

9. Timeline

10. Initial rollout

11. Creating a production system

12. Influencing the University decision-making culture



Kristin Kennedy; Arizona State University          

iRetention: How Arizona State University made a retention dashboard work for everyone        

Arizona State University enrolled over 73,000 students in Fall 2012. A student population this large comes from many diverse backgrounds, cultures, and socioeconomic levels; a true microcosm of the world’s population. In order to help all these students succeed and continue their education, we knew that no single retention policy would work across the board. As a result, we brought together functional areas and data from around the University to build a dashboard that would become a gateway to individualized retention efforts meant to help each student succeed in their goals and continue their education. We will show the key functionality this dashboard provides, as well as the process of how it came to be and how it is being used across the University.



Nancy McQuillen and Yan Ren; University of Washington          

On the Road to Managed Metadata       

The University of Washington’s metadata team is pursuing a two-part agenda in 2013 to 1) improve the understanding and use of central EDW data assets, and 2) establish a campus-wide “information asset” registry and dictionary. This session will report progress to date and lessons learned related to the processes, people, and tools employed in the two parts of this program.

Part 1: EDW metadata. UW’s EDW is evolving toward self-service BI, based on increasingly robust dimensional models. This progress surfaces the “gap in understanding” between the development team and the end users: users find it hard to grasp the meaning of the data without well-named and clearly defined business metadata. To bridge this gap, the team developed a standard framework to create and manage business metadata for the EDW and BI. The framework includes standardized metadata elements, identified stakeholders and roles, streamlined authoring, approval, and publishing processes, and supporting tools. The benefits extend beyond BI to also influence data governance and data quality.

Part 2: Campus metadata. In comparison with the EDW metadata program, the campus-wide program documents a broader range of “information assets,” including reports, documents, and webpages in addition to database data. All of these assets can be searched by keywords, synonyms, and other semantically related business terms within the registry. The road to campus metadata maturity is predicted to be long and challenging. The program includes education and outreach activities to the data custodian community, to enlist their help in metadata development and glossary consolidation, and to raise awareness of campus and EDW data and metadata resources.
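The synonym-aware lookup an asset registry like this provides could be sketched as follows. This is purely illustrative (not UW’s implementation); all asset names, keywords, and synonym pairs are hypothetical.

```python
# Hypothetical registry: each "information asset" carries a set of keywords.
registry = {
    "Enrollment Census Report": {"keywords": {"enrollment", "census", "headcount"}},
    "Student FTE Extract": {"keywords": {"fte", "enrollment"}},
    "Budget Variance Dashboard": {"keywords": {"budget", "variance"}},
}

# Semantically related business terms, so searches find assets tagged
# with a synonym even when the user's word isn't an exact keyword.
synonyms = {"headcount": {"enrollment"}, "enrollment": {"headcount"}}

def search(term):
    """Return asset names tagged with the term or any registered synonym."""
    terms = {term} | synonyms.get(term, set())
    return sorted(name for name, asset in registry.items()
                  if terms & asset["keywords"])

print(search("headcount"))  # ['Enrollment Census Report', 'Student FTE Extract']
```

The synonym table is where the “glossary consolidation” work pays off: the more related business terms are captured, the more findable each asset becomes.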



Derek Messie; Cornell University           

Cornell University Institutional Data Warehouse Technical Architecture and Process Overview               

Cornell University’s Institutional Data Warehouse (IDW) sources data from multiple ERP systems (PeopleSoft, Workday, Kuali), as well as a variety of other campus community sources.  The challenge for the Institutional Intelligence (I-squared) project at Cornell is to integrate data from these various heterogeneous sources and deliver consistent, accurate and secure, cross-functional reporting to senior university leadership.  This presentation will detail the technical architecture and processes involved, from source data replication, to an integrated subject-oriented Institutional Data Store (IDS), through to the IDW star-schemas and conformed dimensions used for executive cross-functional reporting using OBIEE.  Many of the challenges encountered in this project will also be discussed.


Yuko Mulugetta, John White, Amr Mohamed; Ithaca College   

Exploring Social Media Data for Decision-Making Purposes

Since 2006, Ithaca College has operated its own social network, “ICPEERS,” for incoming class applicants. We will present how we have explored and utilized ICPEERS data in the decision-making process and how we plan to use it in the wake of the “Big Data” environment.



Brenda Reeb; University of Rochester  

Business Glossaries – What’s available and how to evaluate it

A business glossary is a metadata tool that publishes data attributes and definitions for a business-user audience. The tool improves users’ understanding of data and increases confidence in using data. This session includes an overview of business glossary products on the market, with a focus on functionality that differentiates each product. Learn what to include on an RFP, how to gauge your organization’s maturity for a tool, and how to position a business glossary separately from a technical metadata repository (such as a data dictionary). Results from an HEDW member survey conducted in January 2013 will highlight how this kind of tool is currently used in higher education.

This program is not an endorsement of any specific business glossary tool.  Emphasis is on a comparative analysis of the tools on the market, a framework for how to shop for one, and usage of these tools among HEDW members. The program will appeal to people who plan to purchase a business glossary tool in the next 12-18 months.

John Rome; Arizona State University    

The Scoop on Hadoop: Making Sense of “Big Data”        

Big data hype is all around us and it’s hard not to jump on the “big data” bandwagon. Academic institutions face numerous challenges analyzing “big data.” The need to transport, process, and extract information from large sets of data, in terms of quantity and complexity, is becoming increasingly critical. Finding meaning from large and often unstructured data is currently difficult and time-consuming, and often outside the purview of traditional higher ed business intelligence departments. This presentation will describe how Arizona State University (ASU) is addressing the “big data” challenge from both an administrative and research standpoint. In addition to defining and describing big data, ASU will show how it is using big data technology and techniques to work with large and often unstructured data and show how it fits within their business intelligence (BI) architecture.



Sonja Schulze and Bodo Rieger; University of Osnabrueck          

Using Semantic Wiki in HE Data Warehousing  

This presentation will illustrate the recently designed prototype SEMUOS, developed at the University of Osnabrueck, which uses a semantic wiki for collaborative knowledge management to support BI users with background information, helping them better understand data warehouse-based applications and share experiences with others. An increasing degree of complexity in reports and the exploding number of analyses available for strategic and operational decision support make it imperative to have a platform for communicating the terms used, explaining the meanings of facts, and making this knowledge available to both BI users and developers. The open-source Semantic MediaWiki, an extension to the MediaWiki engine, provides a framework for building semantic wiki sites. By adding semantics to wiki pages, users can search for information more efficiently with the built-in query language: for example, not only displaying metadata about a cube or report, but also asking for the number of dimensions available, showing all reports that use a specific fact, or finding more information and explanations about the dimensions and facts used in a report. Additionally, users can act as authors and share their own experience with analytical reports by editing wiki pages directly in the web browser and publishing them immediately to the audience.
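The kinds of questions the abstract describes asking of the semantic wiki (how many dimensions does a cube have? which reports use a given fact?) can be pictured as queries over subject/predicate/object annotations. The sketch below models that in plain Python as a loose analogy to a Semantic MediaWiki inline query; it is not SEMUOS code, and all cube, report, and fact names are hypothetical.

```python
# Hypothetical semantic annotations: (subject, predicate, object) triples,
# as a semantic wiki would attach them to its pages.
annotations = [
    ("Enrollment Cube", "has_dimension", "Term"),
    ("Enrollment Cube", "has_dimension", "College"),
    ("Enrollment Cube", "has_fact", "Headcount"),
    ("Retention Report", "uses_fact", "Headcount"),
    ("Budget Report", "uses_fact", "Expense Amount"),
]

def ask(predicate, obj=None, subject=None):
    """Minimal triple query, loosely analogous to an SMW #ask."""
    return [(s, p, o) for s, p, o in annotations
            if p == predicate
            and (obj is None or o == obj)
            and (subject is None or s == subject)]

# How many dimensions does the Enrollment Cube have?
print(len(ask("has_dimension", subject="Enrollment Cube")))  # 2
# Which reports use the Headcount fact?
print([s for s, _, _ in ask("uses_fact", obj="Headcount")])  # ['Retention Report']
```

The point of the semantic layer is exactly this: metadata stops being free text on a wiki page and becomes structured facts that any user can query.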


Banu Solak; University of Massachusetts            

Graduate Tracking System: The Greatest Data Transformation Ever        

At UMass-Amherst, a system for tracking the retention of graduate-level students, both Master’s and Doctoral, has been implemented for use with the Data Warehouse and our BI system. The Graduate Tracking System requires information that goes back more than 10 years; our PeopleSoft Student Administration system was implemented in 2003. In this presentation, the implementation details will be shared with the audience, the obstacles encountered will be discussed, and the data warehouse conceptual model will be presented.



Ed Stemmler and William McManus; University of Pennsylvania            

Course Evaluation: From Rags (Paper) to Riches 

In 2008, the University of Pennsylvania undertook a change, converting from a paper-based, labor-intensive Course Evaluation process to an on-line system. Evaluations of undergraduate courses had been managed by two of the undergrad schools at Penn, with results scanned, collated on a mainframe, and loaded to Penn’s data warehouse. The on-line project streamlined the process, loading course, student, and instructor data from our student data collection and enabling 10 schools to run their own evaluations. The architecture of the on-line system required some clever revisions to the ETL process and data model, moving from a static format to one supportive of a dynamic format. The presentation will review the project and goals, the modifications to the legacy reporting schema, the improvements achieved through the dynamic formatting and improved data model, and the results of using on-line evaluations. Penn’s evaluation process continues to expand and evolve, with school-, departmental-, and instructor-based questions, something only made possible by the changes to the data model.



Emily Thomas and Ora Fish; New York University            

Academic Department Metrics in Nine Weeks: Lessons Learned             

We will demonstrate an OBIEE dashboard built in nine weeks to profile the resources and activities of science departments and discuss lessons learned from the project. The dashboard content was defined by a provostial advisory group that identified a set of “key metrics for assessing science and technology departments.” The dashboard contains most of those metrics, synthesizing data from multiple institutional systems and departmental input. Administrative politics gave us nine weeks to produce a fully functional dashboard for demonstration at a high-visibility retreat of the university’s executive and academic leadership. We will discuss issues including functional/IT collaboration to meet tight project deadlines, aligning data from multiple systems to departments and sub-departmental units, providing metadata, and facilitating user validation to gain acceptance. We hope the demonstration will stimulate audience discussion about managing warehouse projects and the design and dissemination of dashboards for academic executives.


Lance Tucker and Ravindra Harve; Boston College             

Clash of the Titans – DW vs. MDM          

Two years ago, Boston College embarked on a Master Data Management initiative. The purpose of this project was to provide a hub-and-spoke methodology for data brokering and to complement our 10-year-old Data Warehouse. This presentation will cover our experience and lessons learned when two development teams tackle a related set of requirements. Special focus will be given to Data Governance and the challenges we faced in defining tools, boundaries, and roles between the two projects.



Aaron Walz; University of Illinois

Collaborative BI in a Distributed World


BI can add huge value in Higher Education, but it’s not easy. It requires a high level of collaboration across parts of the organization that don’t normally have to work quite so closely together. Complicating this, in many institutions the work of BI is distributed among many different offices, each with their own staff and their own agenda.   This is certainly true at the University of Illinois. Despite a mature, successful implementation, we still face many challenges. We’ve been successful in bringing more data to more people. An unfortunate outcome of this is many offices establishing their own data kingdoms built on extracts from the Data Warehouse. In the name of self-service BI, we killed the “report fairy” that delivered canned mainframe reports to people’s desks. In fact, self-service BI has been so successful that we now have dozens (if not hundreds) of different versions of the same basic reports that each department builds and maintains for themselves. We need better collaboration!  This session will propose a future state for a collaborative approach to BI in Higher Ed, and offer some thoughts on how to get there. There will also be a group discussion on what does and doesn’t work.



Joanne Wilhelm and Rebecca Cooksey; Indiana University

Implementing Business Intelligence using Microsoft Tools

Indiana University is completing the first phase of our Business Intelligence Initiative, including the deployment of the Microsoft BI tool set and a Consolidated Business Intelligence portal. This session identifies the steps taken in our process, from building the foundation of our BI strategy to deploying BI and analytics. We will share our approach to the foundational requirements for BI, including governance, standards, data modeling, and infrastructure. We will highlight the Microsoft tools and the Consolidated Business Intelligence portal, which enables us to integrate new BI and analytic services along with our existing decision support services and Oracle data warehouse. We will show dashboards and deliverables in subject areas ranging from finance to enrollment to student success. Microsoft BI tools used include Dashboard Designer, Reporting Services, and PowerPivot delivered through SharePoint and the portal. We will share what we have learned and discuss risks and rewards of this strategy.

Elisabetta Zodeiko; Princeton University            

Testing Data, Reports & Patience: How Princeton Got a Grip on Testing an Ever-Changing BI Environment                

Over eight years, Princeton University’s Information Warehouse has evolved to become the University’s primary B.I. environment, serving Faculty, Students, and Administration. Since January 2012, over one million reports have been run by Warehouse Clients, over fifty thousand in the month of September 2012 alone. During this time, the Warehouse had to survive upgrades and patches, fixes and new releases, new hardware, new operating systems, an expanding architecture, and the occasional natural disaster. Through all of these events, proper testing ensured that our Users never lost the exceptional Warehouse service they had come to expect and enjoy. “How did you do that?” you ask. Very good Testing. And Google. This presentation explains the various events affecting Princeton’s Warehouse (architectural growth and multiple instances, hardware changes, database software and B.I. patches, fixes, and upgrades, and operating system and E.R.P. patches, fixes, and upgrades), our process for determining what should be tested (data, functionality, reports), and who completes the testing (IT or Client). We’ll share our experience with automating our test plans within our B.I. tool, experiences from past B.I. and PeopleSoft upgrades, and plans for tackling the future PeopleSoft split. (As a bonus, there will be no Test given after this presentation.)