Save the date! Buckeye DAMA Fourth Quarter Meeting – November 10th

Our speaker will be David Marco.
Watch this space for further details.

Third Quarter Meeting – Sept 22


Please join us for our third quarterly meeting of 2016. We have two dynamic presenters lined up, each with a presentation sure to help with your professional development.

When & Where:
Start: 2016/09/22 8:30 AM
End: 2016/09/22 12:00 PM

Expedient Presentation Hall
5000 Arlington Centre Boulevard, Upper Arlington, OH

We hope to see you there!

NoSQL (What, Where, When, Why, & How)

Different problems require different tools for solving them. Everyone has a hammer, but a toolbox full of differently sized hammers is not the best approach. As data needs morph from “a single source of the truth” to “a single source of the truth based on a particular perspective,” understanding the best tool for each of those perspectives is key to providing the best access to, and understanding of, the data. In this presentation we will discuss NoSQL (or schema-less) data stores and how each of them is targeted at specific problems. Additionally, we will examine the architectural traits of systems that lend themselves to distributed data and how NoSQL can help implement those types of designs.
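
As a rough, hypothetical illustration of the schema-less idea (not taken from the talk), the short Python sketch below contrasts a fixed relational table, where every row must match the declared columns, with documents that each carry their own structure. The table, fields, and data are invented for the example:

    import json
    import sqlite3

    # Relational ("schema-on-write"): every row must fit the declared table.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
    db.execute("INSERT INTO customer VALUES (?, ?, ?)", (1, "Acme Corp", "Columbus"))

    # Schema-less documents ("schema-on-read"): each record carries its own shape,
    # so a new attribute can appear without migrating the whole store.
    documents = [
        {"id": 1, "name": "Acme Corp", "city": "Columbus"},
        {"id": 2, "name": "Globex", "contacts": [{"name": "Pat", "role": "buyer"}]},
    ]
    print(json.dumps(documents[1], indent=2))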

About the speaker:
William Klos is a Senior Architect and Centric’s National Cloud Services Lead. Bill’s career has spanned many aspects of computing; at various points he has architected solutions from the perspectives of data, networking, enterprise, and security, but he is primarily an application architect. His most recent experience has him providing solutions around Mobility, Cloud, and Big Data architectures as well as API design and development. Bill has been involved with technology since abandoning his desire to be a real architect and stumbling into his first computer science class in 1985. Since then he has typically pushed companies into “what’s next”.


Agile is Hard To Do With Data Warehousing (and Still Worth It)

Agile approaches to software development have flourished in object-oriented environments. Being able to deliver business value frequently (monthly, weekly, daily) has been made technically possible by continuous integration, unit test frameworks, smaller pieces of code, collective code ownership, and frequent refactoring. Moving in the same direction is possible within data warehousing environments, but it presents different challenges.
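
As a purely illustrative sketch of what a unit test around warehouse logic can look like (the rule and function below are hypothetical, not from the talk), a small transformation can be tested the same way application code is:

    # Hypothetical conforming rule for a customer dimension row.
    def conform_customer(raw: dict) -> dict:
        """Trim and title-case the name; default a missing country to 'UNKNOWN'."""
        return {
            "customer_key": raw["id"],
            "name": raw["name"].strip().title(),
            "country": raw.get("country", "UNKNOWN"),
        }

    # A pytest-style unit test: small, fast, and runnable on every check-in.
    def test_conform_customer_defaults_country():
        row = conform_customer({"id": 42, "name": "  acme corp "})
        assert row == {"customer_key": 42, "name": "Acme Corp", "country": "UNKNOWN"}
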
About the speaker:
Mike Kaiser has been involved in Columbus-area IT for 15 years. He is an Agile Coach and the Data and Analytics Lead at Centric Consulting. He also serves on the board of the Central Ohio Agile Association (COHAA). Mike’s career has been about maximizing business outcomes via IT, and his passion for bringing lightweight, collaborative software approaches to teams grew out of that focus.

SAVE THE DATE! Next Meeting – September 22

Our next Meetup is currently in the planning stages for September 22.

Our apologies for the delay, but the scheduling didn’t work out for the previously mentioned date.

Watch this space for additional details.

When & Where:
Start: 2016/09/22 8:30 AM
End: 2016/09/22 12:00 PM

Expedient Presentation Hall
5000 Arlington Centre Boulevard, Upper Arlington, OH

Second Quarter Meeting – Hans Hultgren – May 18th

We’ll be meeting at:
Expedient Presentation Hall
5000 Arlington Centre Boulevard
Upper Arlington, OH

Ensemble Modeling
Ensemble Modeling represents a family of modeling forms that share a common purpose and a common modeling paradigm. Ensemble forms address our need for data integration, historization, auditability, and modeling agility. Today over 1,400 models exist using this technique, with the majority leveraging the Data Vault modeling pattern. This session will cover the need, the approach, the underlying premise, and the current flavors of Ensemble Modeling. Attendees can expect to understand why organizations should consider Ensemble Modeling for their DW/BI program.
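
For readers new to the pattern, here is a minimal, hypothetical Python sketch of the Data Vault idea the session builds on: a hub carries only the business key, while satellites accumulate descriptive attributes with load dates, which is how historization and auditability fall out of the model. The entity and fields are invented for illustration:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass(frozen=True)
    class CustomerHub:
        customer_business_key: str   # the stable business identifier, nothing else
        load_date: datetime
        record_source: str

    @dataclass(frozen=True)
    class CustomerNameSatellite:
        customer_business_key: str   # ties the attributes back to the hub
        load_date: datetime
        record_source: str
        customer_name: str

    hub = CustomerHub("CUST-1001", datetime(2016, 1, 4), "CRM")
    # History is additive: a name change is a new satellite row, not an update.
    name_history = [
        CustomerNameSatellite("CUST-1001", datetime(2016, 1, 4), "CRM", "Acme Corp"),
        CustomerNameSatellite("CUST-1001", datetime(2016, 5, 2), "CRM", "Acme Corporation"),
    ]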

Big Data Modeling
Data modeling is generally not a part of our Big Data conversations. The idea of schema-on-read has often been coupled with the idea that modeling is not needed in these deployments. While modeling may be different in the big data world, the truth is that it is equally important to the organization, if not more so. This session covers the topic of data modeling in the big data world and will touch on conceptual, logical, and physical modeling considerations. Attendees can expect to learn how and why modeling is a useful and integral part of the organization’s big data strategy.
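
As a toy, hypothetical example of why a model still matters under schema-on-read (not from the session), the sketch below stores raw events untouched and applies the expected fields, types, and defaults only when the data is read:

    import json

    # Raw events land as-is; nothing is rejected at write time.
    raw_events = [
        '{"user": "u1", "amount": "19.99", "channel": "web"}',
        '{"user": "u2", "amount": 5}',
    ]

    # The "model" lives in the read path: expected fields, types, and defaults.
    def read_with_schema(line: str) -> dict:
        event = json.loads(line)
        return {
            "user": str(event["user"]),
            "amount": float(event["amount"]),
            "channel": event.get("channel", "unknown"),
        }

    print([read_with_schema(line) for line in raw_events])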

Speaker Bio:
Hans Patrik Hultgren is President at Genesee Academy and a Principal at Top Of Minds AB. He is a Data Warehousing and Business Intelligence educator, author, speaker, and advisor. Hans is currently working on Business Intelligence and Enterprise Data Warehousing (EDW) with a focus on Ensemble Modeling and Data Vault, primarily in Stockholm, Amsterdam, Denver, Sydney, and NYC. Hans published the data modeling book “Modeling the Agile Data Warehouse with Data Vault,” which is available on Amazon in both print and Kindle e-reader editions. Specialties: Information Management and Modeling, Ensemble Modeling, Data Vault Modeling, Agile Data Warehousing, Education, e-Learning, Entrepreneurship, and Business Development.

Twitter: gohansgo
LinkedIn: hanshultgren

Buckeye DAMA First Quarter Meeting! Thursday, Feb 25th @ 5000 Arlington Centre Boulevard, Upper Arlington, 8:30-11:30 AM

Topic: Data-Centric Approach to Enterprise Architecture: Embedding Data Capabilities into the Operating Infrastructure

Abstract: There are many challenges in developing a long-standing “legacy” company into a data-driven organization. Companies established in the last ten years are better prepared, as they are more likely to have been conceived with a data-centric vision in mind. However, re-orienting how a “legacy” organization thinks about data (e.g., data as an asset), integrating data into business strategies, changing cultural behaviors, changing approaches to application development, and rethinking enterprise architecture are some of the most daunting tasks in transforming “legacy” into “data-driven”.

Aligning the data strategy to the business strategy is now commonly accepted, but making that a reality can be a daunting journey. When enterprise architecture considers data requirements to the same degree and at the same level as it considers business requirements, data-centric capabilities become embedded as foundational to the designs of enterprise solutions. This means considering the entirety of the enterprise architecture, not just, for example, the analytics or reporting portions. When data requirements are embedded in application requirements, the organization is forever playing catch-up. Data is “following” instead of “driving”. That is, the business constantly creates work-around solutions and performs non-value-added work to deal with gaps caused by unique and stove-piped data requirements created for each application. These application-based data silos create greater risk and prevent the business from rapidly responding to the ever-changing marketplace.

This presentation will provide insights into how enterprise architecture can be leveraged to instantiate a data-centric operating environment. Different enterprise operating models will be presented to demonstrate how the enterprise architecture can apply a data-centric approach to embed data-driven capabilities within the operating environment. Use cases will include a global financial services company and a donor-supported healthcare company.

The presentation will provide the following takeaways for the audience:

  1. Provide a framework for understanding and differentiating enterprise operating models and how they impact the enterprise architecture requirements.
  2. Provide data-driven design principles to be applied to the enterprise architecture.
  3. Provide first-hand experience through two case studies on how a data-driven approach to enterprise architecture can transform an organization and create a competitive advantage.


Presenter: Lewis Broome
Lewis Broome is a visionary in operationalizing data management concepts. An innovative and practiced thought leader in data management, Lewis has more than 20 years of experience successfully designing, managing, implementing, and leading global data management and information technology initiatives. He is a sought-after speaker in Healthcare, Analytics, Finance, Insurance, and the broader data community. He also presents regularly to executive MBA classes on marketing and branding, information management, technology, and general management.

Q4 Meeting: Agile BI with Ralph Hughes

At 8:30 AM on Thursday, December 10, 2015, Buckeye DAMA is excited to present Ralph Hughes, author of Agile Data Warehousing and, most recently, Agile Data Warehousing for the Enterprise. Ralph is a world-renowned leader, TDWI speaker, and consultant in the field of data warehousing. We are delighted to bring him in to speak to you.

Seating will be limited, so please RSVP well in advance. If you do not already have a membership (individual or corporate), please review the Membership Options page. Non-members may attend their first meeting at no charge as long as identification is presented at registration.

OVERVIEW – Agile practices burst onto the scene fifteen years ago. Since then, the industry has seen wide adoption in application development. We now have mature, agile-based tools for project management, requirements management, and the SDLC.

Business Intelligence, however, has lagged behind in both tools and methodology. Ralph Hughes changed that for data warehousing, and he’s doing it again for Big Data. Join us to learn how you can apply agile principles to both of these highly relevant solutions.

TOPIC – During this event Ralph Hughes will provide an overview of the strategies and tools that can make a company’s data warehouse and Big Data projects deliver higher value in a shorter time with far less risk.

It is no surprise that pundits suggest inflated expectations for big data have slid into the trough of disillusionment when one realizes that half of big data projects proceed without a guiding business plan. In this presentation, one of the world’s experts on enterprise business analytics will assemble the ingredients that made incremental and iterative development succeed for enterprise data warehousing into a recipe for fast ROI on big data projects.

We’ll discuss the poly-structured data processing tools available today and assess the BI use cases where they fit best.  Next, we will describe how to employ just-enough requirements and progressive solution architectures to drive the risk out of our big data projects.  Finally, we will wrap this new approach with a test-led development strategy that makes big data projects transparent, predictable, and low risk.

The result will be a recipe for your next big data project that ensures fast delivery and highly valued results without the risk of dead ends or wasted programming. Topics will include:


>> The agile development value cycle for traditional data warehousing.

>> The agile development value cycle adapted for big data projects.

>> Appropriate targets for M/R, Spark, graph models, and document database technologies.

>> Escaping the trap of the “big, scary hardware platform up-front”.

>> Visualizing goals and progress with nightly regression testing and QA automation engines.

SPEAKER BIO – Ralph Hughes serves as Chief Systems Architect for Ceregenics, a Denver data analytics consulting firm.  He has been building data warehouses since the mid-1980s, starting with mainframe computers, and has led numerous BI programs and projects for Fortune 500 companies in aerospace, telecom, government, and life sciences.

He authored the industry’s first book on agile data management, and his third book, “Agile Data Warehousing for the Enterprise,” was released in October 2015. Ralph is a faculty member at The Data Warehousing Institute, a certified Scrum Master, and a PMI Project Management Professional, and he has coached over 1,200 BI professionals worldwide in the discipline of incremental and iterative delivery of large data management systems.

Ralph holds BA and MA degrees from Stanford University in computer modeling and econometric forecasting.


Our Third Quarter Meeting for 2015 is on!

Thanks for your patience – we had some scheduling difficulties, but it’s all sorted out, and we’ve got a great presentation for you on a data topic you requested via our last topic survey.

Kevin Cartier and Jeffrey Anderson of Vertex Computer Systems will present “The Outer Limits of Data Science”, a survey of big data challenges known to test the limits of existing modeling techniques & information theory. In addition, they will conduct a review of the essential activities & procedures of data science methodology, with particular attention paid to their respective roles in coping with the various problem types associated with big data.

The session will wrap up with a holistic conversation about how to move from wanting to apply data science to big data challenges to acting on them, with the focus on the necessary enablers and support elements for impactful data science and advanced analytics teams within your organization.

We hope to see you there!

Monday, October 5th 2015
8:30 AM – Noon


Jeffrey Anderson is a senior technical architect and strategist in system design and delivery for a variety of Fortune 100 companies in high technology, banking, retail and consumer goods, manufacturing, and healthcare, with a strong focus on traditional Enterprise Data Management and emerging analytic tools and techniques. Jeffrey has designed and supported systems at extreme scale (volume and responsiveness), specializing in transformative projects in the retail and banking domains. Over the years, Jeffrey has been a trainer and speaker in the US and internationally on topics ranging from data integration techniques, collaborative analytics, and enterprise and project architecture to identity management and master data governance.

Kevin Cartier is an IT professional with over 20 years of software development experience across a wide range of industries including finance, medicine, marketing, aerospace, energy, and manufacturing. Specific project types have included database systems, embedded controllers, back-end interfaces, network interfaces, compilers, and end-user applications. He holds advanced degrees in both computer science and biostatistics. As a part-time instructor, he taught undergraduate courses in C/C++ Programming, Object-Oriented Design, VB/Access Programming, Data Structures, Systems Analysis, and Ethics in Computing. While a project manager at Case Western Reserve University, he successfully led development of a 500,000-LOC C++ project in human genomics. During that time he acquired first-hand exposure to emergent machine learning methods and big data technologies, such as Hadoop, that are increasingly applied to business challenges today. Kevin’s primary expertise is in quantitative analysis and the translation of statistical methods into code for mission-critical systems. He is proficient in the Python and R languages for data manipulation and visualization and is fully versed in current data science methods, techniques, and tools.