LMS Data import issues
When implementing an LMS, the existing foundation data (organizations, job types, locations, people) and training history (courses assigned, attended, completed, etc.) have to be imported into the new system. These records, often kept in several different formats (spreadsheets; Access or FileMaker databases; an accounting system, HRIS, or existing LMS), can create a barrier to a quick and clean transfer to your new system:
- The existing data is not adequately formatted for reporting purposes (some data is unavailable, some is recorded in different databases or in different measurement units, etc.).
- Some data is not suited to processing by an LMS; job type titles are not labeled to suit competency management needs (titles are too broad or too specific, some are redundant, or there are simply too many of them), and therefore cannot be matched with appropriate competencies or courses.
- Existing data is not formatted to meet the new LMS's import requirements (variable types, number of characters in a data field, etc.).
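As an illustration of that last point, a minimal check of two such import constraints (a maximum field length and a required value type) might look like the following sketch. The rule set and limits here are made up for the example, not taken from any particular LMS:

```python
# Hypothetical import constraints: each field has a maximum length and a
# required type. The specific limits below are invented for illustration.
RULES = {
    "employee_id": {"max_len": 10, "type": int},
    "last_name":   {"max_len": 50, "type": str},
}

def check_row(row, rules):
    """Return a list of human-readable problems found in one record."""
    problems = []
    for field, rule in rules.items():
        value = row.get(field)
        if not isinstance(value, rule["type"]):
            problems.append(f"{field}: expected {rule['type'].__name__}")
        elif len(str(value)) > rule["max_len"]:
            problems.append(f"{field}: longer than {rule['max_len']} characters")
    return problems

print(check_row({"employee_id": "E-12345", "last_name": "Lovelace"}, RULES))
# → ['employee_id: expected int']
```

Running such checks before the import, rather than inside the LMS, is what turns a cryptic import failure into a readable list of fixable problems.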
The consequences of data incompatibility
A lot of time and resources have to be invested in order to:
- Transform the data so that it is useful in an LMS to push out training and to generate meaningful reports.
- Test the data import in the LMS. During these tests, failures are frequent due to incompatibilities between existing records and the LMS’ requirements.
When data imports fail, one often has to:
- sift through dense and cryptic error reports to decipher error codes,
- inspect the data import files in order to uncover the cause of the error(s), and
- manually make the necessary modifications in the data files, repeating the entire process until the import succeeds.
In some cases data extraction and conversion or reformatting for a new LMS can require the development of a custom software application.
The work involved in preparing existing user, corporate hierarchy, and training history data for LMS import can often take many days, if not weeks.
These unexpected delays can set back the LMS go-live date and turn scheduled updates into time-consuming procedures. Such unplanned, additional burdens can jeopardize the capacity to provision the LMS on time and on budget.
ADTransform, the Artifact Data Transformer, is a lightweight, configurable solution that sits in the workflow between two or more information systems (e.g., an HRIS and an LMS), converting data from one system to fit the import requirements of the other.
ADTransform has a plug-in architecture: it is configured to accept data, transform it to meet a set of requirements, validate the resulting data streams, and output the results.
Because it is configured dynamically, any combination of modules can be assembled to perform many different types of data manipulation.
It can take multiple input sources, perform transformations, and apply multiple validation rules to the resulting data structures before the data streams are output in the desired format.
For example, ADTransform can read information about people from a Human Resource Information System, read organizations, job types, and locations from spreadsheets, map job titles to job types, validate the resulting four data streams, and output them in the format required by an LMS.
ADTransform is constructed as a workflow engine in which each stage consists of multiple plug-ins. Each plug-in performs a single, discrete task, can be used in any stage, and can be used as often as required. The configuration of a specific plug-in is tied to its individual occurrence in the workflow, so the same plug-in can be configured to perform slightly different operations each time.
Example stages and descriptions
- Pre-process – set defaults, get files from remote sites, prepare files and directories
- Input – acquire the raw data and convert it into internal data streams
- Transform – apply the required transformations
- Validation – validate the final data streams
- Output – write the resulting streams in the required formats
- Post-process – send output files to remote sites, clean up directories and files, prepare exception reports, log and audit trails, e-mail results
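The staged workflow with per-occurrence plug-in configuration described above can be sketched in a few lines. The class names, stage labels, and data representation here are illustrative assumptions, not ADTransform's actual internals:

```python
# Minimal sketch of a staged plug-in workflow engine (illustrative only).

class Plugin:
    """One discrete task; its config is bound per occurrence in the workflow."""
    def __init__(self, func, **config):
        self.func = func
        self.config = config

    def run(self, data):
        return self.func(data, **self.config)

class Workflow:
    STAGES = ["pre-process", "input", "transform",
              "validation", "output", "post-process"]

    def __init__(self):
        self.stages = {stage: [] for stage in self.STAGES}

    def add(self, stage, plugin):
        self.stages[stage].append(plugin)

    def run(self, data=None):
        # Run every stage in order, threading the data streams through.
        for stage in self.STAGES:
            for plugin in self.stages[stage]:
                data = plugin.run(data)
        return data

# The same plug-in appears twice below with different configuration,
# matching the "per-occurrence configuration" idea.
def uppercase(rows, field):
    return [{**row, field: row[field].upper()} for row in rows]

wf = Workflow()
wf.add("input", Plugin(lambda _: [{"name": "ada", "dept": "eng"}]))
wf.add("transform", Plugin(uppercase, field="name"))
wf.add("transform", Plugin(uppercase, field="dept"))
print(wf.run())  # → [{'name': 'ADA', 'dept': 'ENG'}]
```

The design point is that plug-ins stay tiny and single-purpose; all the variation lives in the configuration attached to each occurrence, not in the code.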
The plug-ins are configured outside the code for ease of use and robustness. This also allows custom plug-ins to be added to the workflow engine without modifying existing code.
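A configuration of this kind, kept outside the code, might be wired together with a file along these lines. The plug-in names, stage keys, and file layout below are hypothetical, not ADTransform's actual configuration format:

```json
{
  "workflow": [
    {"stage": "input",      "plugin": "csv_reader", "config": {"path": "people.csv"}},
    {"stage": "transform",  "plugin": "map_lookup", "config": {"field": "job_title", "table": "job_types.csv"}},
    {"stage": "validation", "plugin": "unique_key", "config": {"key": "employee_id"}},
    {"stage": "output",     "plugin": "xml_writer", "config": {"path": "people.xml"}}
  ]
}
```

Because the workflow is data rather than code, a custom plug-in only needs to be registered under a name before it can be referenced from such a file.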
Input and Output Data Types
ADTransform can be configured to run multiple plug-ins simultaneously during each stage.
Input and output modules are independent plug-ins, so they can support many types of data sources and destinations:
- Can read and write different file types: Excel spreadsheets, CSV, XML, etc.
- Direct Database access – SQL, etc.
- WebServices and/or APIs can be used to read and write data.
- API documentation is provided for writing new plug-ins, so you can add support for anything not included initially.
- Combinations of the above, where some of the information is in static files and some in databases or external systems.
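As a sketch of what an input/output plug-in pair might look like, here is a CSV reader and writer built on Python's standard library. The function names are assumptions for the example, not ADTransform's actual plug-ins:

```python
import csv
import io

def csv_input(text):
    """Input plug-in: parse CSV text into an internal stream of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def csv_output(rows):
    """Output plug-in: serialize a stream of dicts back to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

rows = csv_input("id,name\n1,Ada\n2,Grace\n")
print(rows)  # → [{'id': '1', 'name': 'Ada'}, {'id': '2', 'name': 'Grace'}]
```

Because both directions speak the same internal stream-of-records format, an Excel reader, a database reader, or a web-service writer can be swapped in without touching the transformation or validation stages.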
ADTransform is highly versatile and easily configurable in order to meet the most complex data transformation needs.
ADTransform provides better validation than some of the importing systems, which often throw internal error messages that are hard, in fact almost impossible, to decipher.
For example, if ADTransform finds a duplicate key, it will tell you both of the rows where identical keys occurred, rather than just putting out an “invalid duplicate key found” or “transaction aborted” message.
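A duplicate-key check in that spirit, reporting both offending row numbers rather than a bare error code, can be sketched as follows (illustrative code, not ADTransform's own):

```python
def find_duplicate_keys(rows, key):
    """Return (key value, first row number, duplicate row number) triples.

    Row numbers are 1-based, so they match what a person sees in the file.
    """
    seen = {}
    errors = []
    for lineno, row in enumerate(rows, start=1):
        value = row[key]
        if value in seen:
            errors.append((value, seen[value], lineno))
        else:
            seen[value] = lineno
    return errors

rows = [{"id": "100"}, {"id": "200"}, {"id": "100"}]
print(find_duplicate_keys(rows, "id"))  # → [('100', 1, 3)]
```

Pointing at both rows lets the person fixing the file decide which record is the stale one, instead of hunting for the collision by hand.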
It can do table lookups to map positions or job titles (as printed on business cards) to job types (labels more appropriate for training management purposes), or to turn department accounting codes into locations or organizations suitable for training.
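Such a lookup is just a mapping table applied as a transform step. The titles, job types, and default value below are invented for the example:

```python
# Hypothetical lookup table from free-form job titles to training job types.
JOB_TYPE_TABLE = {
    "Sr. Software Engineer": "Engineer",
    "Software Developer II": "Engineer",
    "HR Business Partner":   "HR Generalist",
}

def map_job_type(row, table, default="UNMAPPED"):
    """Transform step: attach a job type derived from the business-card title."""
    return {**row, "job_type": table.get(row["job_title"], default)}

person = {"name": "Ada", "job_title": "Sr. Software Engineer"}
print(map_job_type(person, JOB_TYPE_TABLE))
# → {'name': 'Ada', 'job_title': 'Sr. Software Engineer', 'job_type': 'Engineer'}
```

Rows that fall through to the default can then be flagged by a validation step, so unmapped titles surface in the error log instead of silently entering the LMS.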
It can produce an audit trail of all of the activities and a separate error log of any invalid data found during its processing.
Reporting: using the free, open-source JasperReports engine, it can produce reports on the data flowing through and e-mail them to a specified group of recipients.
This can save a lot of headaches (and time and money) during implementation, where you often find that the HRIS data is very accurate about the information required to pay someone, but not so good on other things the LMS cares about (a valid manager, for example).
There are many situations where integrating and manipulating data flows is required.
Use cases occur in learning management, as well as in the logging and reporting of call details for PBX systems.
Other use cases are discussed in the General ADT Use Case section.
Current release: Version 2.0
The release includes a User Manual for System Administrators, which contains all of the information required to install and run a configured ADTransform, as well as the information required to configure a new workflow.
ADTransform replaces the creation of a custom program or complex data extraction by the customer’s IT department.
It performs much more comprehensive and understandable validation.
It can be configured without programming and can be extended through custom plug-ins added via configuration files.