Category Archives: Assessments

That Time Your Lawyer Was Glad You Used A Hiring Assessment

Published with the permission of OMS Distributor Harry Lakin, Founder of Hire Capacity

Inevitably, if you’re in business long enough, it’s going to happen to you.

You will pass on a candidate and it’s going to tick them off to such an extent that they’re going to threaten to sue you.

They may claim racism, sexism, ageism, or a host of other “isms.”

What do you do?

Even if you’ve been extra thorough about interviewing, résumé vetting, and reference checking, some attorney somewhere will be willing to take the case.

You and your company are not racist or sexist or ageist…but that really doesn’t matter now, does it? This person and their mouthpiece want their pound of flesh, er…payday!

Pay close attention, because I’m about to give you your “Get Out of Jail Free” card.

It’s imperative that you use an objective NORMATIVE hiring assessment as a regular part of your hiring process.

Below is an example of the output from our OMS Assessment. It compares a candidate’s behaviors to those from our JAX Job Model (name redacted). Can you tell from this graph if the candidate is a man or a woman? Can you tell if they are Asian, Anglo, or Native American? Can you tell how old they are? Can you tell if they’re in a wheelchair, or blind?

The answers are most definitely no.

The only information one can discern is behavioral. And the right behaviors are something you can and should be looking for in the best candidates for your positions.

Further, so long as you’ve clearly created a job model that the candidate has been reliably measured against, you can honestly (and easily) say they’re either a fit or not…and that other candidates (i.e. the ones you progressed with) were a better fit. Add to the mix solid statistical validation and their “ism” assertions become weaker by the minute.

In the instance above, the candidate’s results are in red and the job model’s required behavioral traits are in black. Clearly, this candidate is not a fit for this role.

Don’t get me wrong. They may still sue you, but the more arrows in your defense’s quiver, the better off you’ll be in front of a judge or jury (should things progress that far).

All of this comes on top of the added BONUS you’ll get by having a phenomenal way to tell which candidates you should actually BE hiring.

That last sentence is a feeble attempt at tongue-in-cheek humor. Of course you should implement a sound assessment strategy as part of your selection criteria for actual hiring.

But if a great assessment tool has the added benefit of being your out when a disgruntled candidate starts rattling sabers…

Well, what’s peace of mind worth?

Sorting Out the Junk Science in Psychometrics

Valid psychometrics are highly beneficial tools. By helping us objectively measure and describe human attributes, they offer us important insights into candidates and employees, which are otherwise very difficult to pull from either interviews or work observations. By comparing test results to accurate job profiles or benchmarks, potential context-relevant job behaviors can also be predicted. And if those psychometric tools use a normative design to compare respondents to a broader population, as they should do if used for decision-making applications, they further enable us to more precisely compare candidates.

Today, under the growing influence of big data and with AI (machine learning) knocking on the door, it is not surprising that some test vendors are seeking competitive advantage with these and other fascinating scientific developments. Behavioral economics, being all about numbers and predictability, certainly holds the promise of some exciting changes in psychometrics, but it is important that we distinguish real advances from the mere illusions of junk science.

One constant over the years is that we all look for silver bullets to simplify our decisions. However, not all decisions can be simplified and not all simplifying solutions are what they are touted to be. We live in a world of exaggerated claims and all sorts of products and services fall short of their marketing hype. In some areas that’s no big deal, but with people decisions, when careers and management plans are on the line, the consequences and costs can be very significant.

The issue is all about prediction. How precisely and accurately can we use a psychometric to predict job fit, behavioral differences among people, or job performance? Since more and more test vendors are claiming they can provide these answers, it’s worthwhile to take a hard look at the veracity of those claims.

Junk Science in Psychometrics

Actually, junk science crept into the field of psychometrics many years ago; we just never called it that. The most obvious example is the deceptiveness of face validity. Any vendor website insinuating that high agreement between a sample of respondents and their test results connotes meaningful validity is either ignorant or deceptive. Reading the works of Dan Ariely or Daniel Kahneman on the irrationality of behavior will make it very clear that validity has no relationship with agreement or personal likes and dislikes. Validity is a statistical measure, not an emotional one, and emphasizing face validity, which is not validity at all, likely indicates that the vendors don’t have real validation or certainly don’t want you to see what numbers they do have. Take a pass!

Another scientific stretch has to do with the job profiles that many tests use in a rather absolutist way. Job profiles are critical to accurate decision-making, but with too many tests, a simple set of generic templates or stereotypes replace actual job analysis. Having undertaken literally thousands of job analyses over the years, many employing content and criterion validation methodology, there’s simply no question that one size does not fit all.

Job Analysis Should Be a Context-Relevant Process

For the more complex jobs for which behavioral assessments are generally used, job analysis should be a context-relevant process taking into account unique situational variables, for example, the personalities of the “boss” and the other people involved, along with variations in cultures, performance standards, job expectations, management styles, training, quality of supervision, etc. In our experience, we have found numerous instances where, because of such variabilities, seemingly similar positions in different organizations required very different personalities. Granted, some jobs can be cloned, but they tend to be task and specialist roles and/or entry level, including service functions, non-transactional retail sales, data entry, and reporting roles, etc.

Getting the job right – understanding both nuanced and unique factors that drive performance – is at least 50% of a selection decision. But all too frequently, simple assumptions and inference supplant what should be a thorough analytical process, resulting in inaccurate job profiles that lead to flawed candidate searches.

Test Design Matters

Fitness for purpose is yet another area where claims are sometimes misleading. Such is the case with normative and ipsative tests. Simply stated, a normative instrument uses a questioning format (for example, yes and no response variations) that enables norms for some population to be developed and individual responses compared to those norms. One candidate might score at the 80th percentile in a certain construct and another at the 40th percentile, so we have a statistical basis for comparing the two people.  The variance in their scores further provides us with a means of determining how their behaviors would differ in specific situations. Since the intent in using a psychometric in decision-making applications is to objectively compare people, this manner of test design is essential in applications such as hiring, internal placements, and succession planning.
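
To make that concrete, here is a minimal sketch in Python. The scale name, scores, and norm sample are all made up for illustration; this is just the arithmetic behind a percentile comparison, not any vendor’s actual scoring.

```python
# Minimal sketch with hypothetical numbers: normative scoring converts a
# candidate's raw scale score into a percentile against a norm group, so
# two candidates can be compared on the same statistical footing.

def percentile(raw_score: float, norm_scores: list[float]) -> float:
    """Percent of the norm group scoring at or below this raw score."""
    at_or_below = sum(1 for s in norm_scores if s <= raw_score)
    return 100.0 * at_or_below / len(norm_scores)

# Hypothetical norm sample for an "assertiveness" scale
norm = [12, 14, 15, 16, 17, 18, 18, 19, 20, 22, 23, 25, 26, 27, 29, 30]

print(percentile(26, norm))  # 81.25 -> roughly the 80th percentile
print(percentile(18, norm))  # 43.75 -> roughly the 40th percentile
```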

Ipsative instruments use a different questioning format and are intended for different purposes. Built on variations of forced-choice questions (for example, “Most like me” or “Least like me”), instruments such as the MBTI and the many versions of DISC provide only a relative indication of traits or attributes, as opposed to a score measured against a statistical norm. Ipsative means self-referent, which translates to using oneself, rather than others or a defined population, as the norm. So, although ipsative tests indicate how one individual prefers to respond to problems or people, they offer no meaningful basis for comparing the strength or visibility of traits across people. If a respondent scores high in dominance, for instance, that simply means dominance is a more prominent behavioral factor than the person’s other traits; it cannot be said that the person is more or less dominant than someone else with a similar test configuration.
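
Here is an equally minimal sketch of why ipsative scores can’t be compared across people. The trait labels and responses are hypothetical; the point is that forced-choice scoring fixes each respondent’s total, so a score is only meaningful relative to that same person’s other scores.

```python
# Minimal sketch with hypothetical items: each forced-choice item awards
# its point to exactly one trait, so every respondent's scores sum to the
# same constant. A high "D" count means dominance outranks that person's
# OTHER traits -- not that they are more dominant than someone else.

from collections import Counter

def score_ipsative(choices: list[str]) -> Counter:
    """Each entry is the trait picked as 'most like me' on one item."""
    return Counter(choices)

resp_1 = score_ipsative(["D", "D", "I", "S", "D", "C", "D", "I"])
resp_2 = score_ipsative(["D", "I", "I", "S", "C", "C", "S", "I"])

print(resp_1)  # Counter({'D': 4, 'I': 2, 'S': 1, 'C': 1})
print(resp_2)  # Counter({'I': 3, 'S': 2, 'C': 2, 'D': 1})
# Both totals are fixed at 8, so the two "D" counts share no common scale.
```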

Suitable for coaching or other self-awareness applications where comparisons to others are unnecessary, ipsative tools are neither designed for nor adequate for decision-making purposes like hiring. But in the marketplace, it’s a case of the blind leading the blind: many vendors either do not understand or choose to just ignore the limitations, and buyers do not really understand what they are buying or using.

Knowing how excited people are to get on the big data train and find that silver bullet, the latest trend is for vendors to attempt to translate what is essentially descriptive data into a single, simplifying comparative number. One vendor, for example, claims that they can provide a number score showing how each candidate compares to the job. It sounds good, and it may attract some buyers, but claiming a degree of predictive precision that is not psychometrically possible is a real stretch of the imagination.

False Assumptions Result in Inaccurate Results

The problem is with the assumptions that are being made about the data being used, all of which have no margin for error. The first assumption is that the job benchmark is accurate and complete. We know that if it’s a standard job template or a stereotype rather than a context-relevant creation, the target is questionable and might even be way off the mark. How meaningful is a predictive value if the candidates are being compared to the wrong target information?

The second assumption, also about the job profile being used, is that in its entirety it captures what is behaviorally significant in the job. The reality is that this is very unlikely. Over the years we have undertaken scores of criterion studies on diverse jobs, and in these analyses we correlate test constructs with objective performance data for a group of people. We generally find anywhere from one or two to maybe a handful of statistically significant correlations out of almost 50 possibilities. So, whereas individual traits or combinations of several traits may be predictive of some aspects of performance, the entire personality syndrome is not. Thus, comparing a candidate’s test results to a behavioral profile, even an accurate one, means comparing characteristics that may have little or no relevance to actual performance, and that may even run counter to the few characteristics that actually do matter. The bottom line is that the predictive value assigned to that candidate’s results may be attenuated by other characteristics that have little bearing upon job performance!

The third assumption tends to gloss over the fact that psychometrics is, at best, an imperfect science, and there are practical limits to what can be predicted and how precise the prediction can be. Start with the well-accepted general assumption that behavior (traits, if you like) accounts for maybe 40% of performance variance in most jobs. That variance factor can be lower in some jobs, for example a nuclear physicist, and higher in others, for example retail sales. So behavior is an important decision-making consideration, but it cannot stand on its own. Even the most positive or negative potential effects can be countered by such factors as knowledge and skill, cognitive ability or intelligence, attitudes, and physical and emotional constraints. Factor in the effects of randomness, which is always a consideration in measuring human abilities, and you realize how unrealistic and unstable any specific number might be. A more plausible approach would be to use ranges of compatibility, for example, high compatibility or low compatibility, because that is about as close to the target as you can reasonably get.
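
If you want to see what that looks like in practice, here is a minimal sketch. The band thresholds are hypothetical; the point is simply that a coarse range honestly reflects the precision actually available, where a single number does not.

```python
# Minimal sketch with hypothetical thresholds: report compatibility as a
# range, not a falsely precise single score.

def compatibility_band(score: float) -> str:
    """Map a raw fit score (0-100) onto a coarse, defensible band."""
    if score >= 75:
        return "high compatibility"
    if score >= 45:
        return "moderate compatibility"
    return "low compatibility"

print(compatibility_band(81))  # high compatibility
print(compatibility_band(52))  # moderate compatibility
```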

As I stated right at the top of this piece, psychometrics can be very informative and very beneficial in so many applications, but they need to be used the way they are intended to be used and within a framework of reasonable expectations.

With OMS you can gain a competitive edge by combining personal decision-making skills and know-how, scientific measurement techniques, and web-based organizational diagnostic tools into a comprehensive decision-making system for all your managers. With OMS your executive team can develop strategic initiatives far more likely to succeed, and make faster, better-informed operating decisions leading to higher individual and group performance, greater retention, and lower costs. Learn more: http://2oms.com/start/

The Eight-Week Career, or Why Do So Many Call Center Salespeople Leave So Quickly?

This article, by Harry Lakin, was originally published on LinkedIn.

Very, very long story short, I was looking through the applicant/assessment database for a client of mine as part of a project when I came across a name that I recognized. The name, I believed, belonged to one of my eldest son’s childhood friends. It’s an unusual enough name that I thought I’d do a bit of digging on LinkedIn to see if this young man was actually now working for my client.

As a result of my snooping (bear in mind, I did not actually contact him directly), the young man sent me a connection request via LinkedIn, and we proceeded to have a bit of an email exchange.

Turns out this fellow did not work for my client, but the subsequent conversation that came from this trip down memory lane could only be described as “eye-opening.”

Seems this fellow had applied with my client to be a “sales manager.” To be honest, the behavioral traits of this young man, as revealed by assessment, were not aligned with those my client has defined as necessary for success in their sales role, and so they’d passed on him.

But what he shared next (unprompted, mind you) really blew me away. The names have been changed to protect the innocent but otherwise, the quote is verbatim.

“XYZ seemed like a really solid company, but had they chosen to hire me I probably wouldn’t have stayed long because I am itching to get back into the health & wellness industry. I am actually meeting with the Regional Director of ABC Gym for a GM spot. I managed a number of 123 Fitness locations for the 1st 1/2 of my 20’s and have missed the industry ever since I left in 2011.”

Am I living in a parallel universe?

Mind blown…!

Let’s dissect his statement:

1. This young man was applying for a job he really wasn’t all that interested in, because he’d really have preferred something else…in an entirely different industry!

2. He implies that if he’d been offered the job, he’d have taken it.

3. Had he hired on, he already knew he wasn’t going to be there long.

4. He had an idea of the direction he wanted his career to go and was pursuing that path simultaneously.

Now let’s take a deeper dive:

First off, there is absolutely nothing wrong with candidates pursuing multiple opportunities simultaneously. In fact, he’d have been remiss not to. And it’s truly a good thing that he had an idea of the industry and role he really wanted. We should always pursue our passions, particularly when they mesh with our innate traits.

But that’s the only part of his statement that makes a lick of sense to me, and sadly, this young man’s thought process is not unique.

Before I go any further, let me state for the record: this fellow is a millennial. And if this sounds eerily like what Simon Sinek discusses in his recent, oft-shared video about millennials that’s making the rounds, don’t shoot the messenger!

You have people applying to work at your company today who are of the same mindset as this young man. Your business (and call centers in particular) is nothing more than a stepping stone along the path to their next job or their “dream” career.

Frank Gump’s excellent article “De-Linking Call Center Performance and Turnover” posits that nobody ever went to work in a call center because it was their lifelong dream to do so.

My client dodged a bullet, as the assessment was the thing that weeded this guy out. That’s not to say, however, that every assessment catches every “short term” applicant. Some will manage to sneak by.

What would hiring this guy have cost my client? There is a significant hard cost to hiring a candidate, not to mention the soft cost of “opportunity lost.” Further, what does turnover do to the morale of any business?

So, what’s a well-intentioned call center manager or business owner to do?

A. Be honest with yourself. Accept the fact that the job you are trying to fill may be just that, a job…and not a career. Too often, what we believe in our own minds is vastly different from reality. Don’t get lost in the sauce.

B. Do everything you can to maximize employee performance while they are working for you. Knowing they may be with you for only a short while, try to get the most out of them while they’re there. To do this, first create a job model of what the ideal candidate looks like within the role – within your business. The emphasis is on YOUR. Don’t hire based upon whom you think your competitors are hiring. Be diligent and strict in ensuring that those you do hire adhere to your job model. This is a must even when the pressure to put butts in seats is severe. Doing otherwise will diminish returns.

C. Get creative. Find new ways to engage the best of the best who end up coming to work for you. Think about real career paths, and make it known to those you hire that exceptional performance will be rewarded with the opportunity for a career, not just a job.

You’re never going to reduce turnover to zero. What you need to do is figure out what an acceptable number for your industry or company is, and then do everything you can to beat it. Be vigilant about hiring the RIGHT people for your roles and let the chips fall where they may.

About Hire Capacity

Hire Capacity is a leading company in the hiring-assessment space. Using the Organizational Management System, Hire Capacity helps organizations develop hiring, selection, and retention strategies. Learn more here.

How to Use Personality Tests for Employee Selection and Rejection

In my experience, most users of personality tests aren’t aware of the practical distinction between using these tests to select candidates and using tests to reject them. In fact, selection and rejection are very different testing stratagems and, for both decision-making and legal reasons, users need to be quite clear on what those differences are.

Candidate selection is what most people associate with testing, but the reality is that tests do a better job of signaling who is unlikely to perform well in a job than they do of predicting who will likely succeed. Even one behavioral factor, by its presence or absence, might greatly increase the odds of job failure, but it rarely works the other way around. Complex roles are dependent upon so many diverse variables that no one test can accurately predict high-level performance on a sustained basis.

Using a test as a method of rejection means employing it in a frontline screening role where a cut-off score of some type determines who gets weeded out and who continues through the evaluation process. This is ideal for high-volume hiring situations where speed is essential. To some people, this seems terribly unfair or, at the very least, impersonal. Quite to the contrary, so long as the organization undertakes an objective, statistically based study of the job, preferably one in compliance with EEOC guidelines for such studies, it is very fair. Personality tests have not been found to produce an adverse impact on any protected group, so tests do not cull out candidates in any disproportionate way, and they provide an objective, as opposed to a subjective, means of differentiating job-performance potential.
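
Mechanically, a frontline screen is nothing more than this minimal sketch. The names, scores, and cutoff are hypothetical; in practice the cutoff comes out of the statistical job study, not a guess.

```python
# Minimal sketch with hypothetical data: frontline screening by cutoff.
# Candidates below a validated cutoff are screened out; everyone else
# continues through the rest of the evaluation process.

CUTOFF = 40  # would be set by a statistically based job study

candidates = {"Avery": 62, "Blake": 35, "Casey": 48, "Drew": 39}

advance = [name for name, score in candidates.items() if score >= CUTOFF]
print(advance)  # ['Avery', 'Casey']
```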

The critical consideration for a rejection process to work is the quality of the job analysis. Sometimes less formal but statistically sound methods work. For example, a group analysis contrasting top and bottom performers in a job, but only where clear behavioral or trait differences exist between the two groups, can be an effective means of measuring candidate potential. Better still is an in-depth statistical study that generates one or more statistically significant correlations between the test measures and actual job-performance outcomes. Such a study might show that sales revenues increase in unison with increases in assertiveness or initiative, in which case candidates with more of those qualities will be more likely to generate higher sales.
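
For readers who like to see the arithmetic, here is a minimal sketch of that kind of criterion check. The scores and revenue figures are fabricated to make the relationship obvious; a real study would also test the correlation for statistical significance.

```python
# Minimal sketch with hypothetical data: criterion validity as the
# correlation between a trait scale and an objective performance outcome.

from statistics import correlation  # Python 3.10+

assertiveness = [52, 61, 47, 70, 58, 66, 43, 74]          # scale scores
revenue_k     = [310, 405, 280, 520, 390, 455, 250, 560]  # annual sales, $k

print(f"r = {correlation(assertiveness, revenue_k):.2f}")
# Strongly positive here by construction; real data is far noisier.
```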

Without rigorous job analysis, and even setting aside possible legal concerns, the screening process could produce false positives, filtering out potentially higher performers in favor of those with less likelihood to succeed. As implied above, when used in isolation, tests can predict performance only in the broadest way. In reality, no tool or process unilaterally has that type of predictive capability.

Alternatively, employing valid personality tests at the selection stage in conjunction with job benchmarks should enable users to objectively extract talents, motivational factors, and job-related behaviors that can otherwise only be inferred from interviews – and with more precise calibration. One of the nuances of understanding personality is that the degree of each trait a candidate possesses is as crucial a consideration as the traits themselves! In some jobs, this gradation can be the one differentiating variable between top and middling performance.

At the selection stage, tests should also help validate and amplify what is learned from other tools and procedures, such as simulations, reference checks, résumés, and interviews. Effective assessment should leave no loose ends, so tests should either confirm insights from these sources or raise red flags where inconsistencies are evident.

Hiring is much like gambling: it’s always a game of odds. The issue is how to shift those odds more consistently in your favor. Tests can do that, but you have to be realistic in your expectations and know what you want testing to contribute to the overall assessment process. Even shifting those odds a couple of points can have a tremendous cumulative effect on improving hiring success and job performance and on minimizing hiring and onboarding costs.

For more than forty years, Frank Gump has been helping corporations become more productive and profitable by helping management teams identify and hire top performers and manage them most effectively. Developed and refined through extensive experience in more than 1200 organizations in the United States, Canada, England, and Australia, ADGI’s Organizational Management System (OMS) is a finely calibrated, technologically advanced decision-making process offering the potential for enormous payback. Contact ADGI for more insight and connect with Frank on LinkedIn. Follow ADGI on Twitter @ADGIGroup. Like ADGI on Facebook and follow us on Google+.

Picking a Test that Works and Suits Your Needs, Part 2

Furthering the discussion regarding test validity and reliability, here is a final tip to assist you with your analysis.

Avoid getting hoodwinked

As a reminder, a test should be able to predict, in a statistically significant way, performance differences among people or some performance outcome. Validity is always a statistical determination and never a subjective one. What is called face validity is not validity in the true sense of the word; it is really more akin to Facebook Likes and Dislikes. You should be justifiably cautious of any test that makes a claim such as “89% of those who received feedback said the results described them accurately,” particularly if no specific statistical data is also provided. A test is not valid simply because people like what it says about them.

Validity and reliability are expressed as correlation coefficients, which essentially measure the extent to which two things move in unison (a correlation by itself is not evidence of a cause-and-effect relationship). For example, in the first two years of life, we would expect to see a high correlation between the weight and height of babies. Correlations express likelihood – the extent to which one variable likely tracks another. So, if a vendor tries to explain validity in some other way, for example as an accuracy percentage, there is simply no scientific basis for that. It’s baloney.

As noted above, in this era of big data, spin is becoming more prevalent, and you need to watch out for it. As an example, in measuring test reliability, the generally accepted cutoff for a trait scale is a .70 correlation. The higher the correlation, the greater the reliability, so .85 is a lot better. Tests have multiple scales, so if one falls slightly below .70, that does not nullify the value of the test or mean that it shouldn’t be used. It simply means that specific scale should be treated more cautiously. The spin is apparent today in several instruments with numerous scales that fall well below the traditional cutoff. The reality is that those scales are weak and their value is questionable. One vendor in particular is using a white paper to rationalize many weak scales by claiming that new and more subjective measures of reliability make the .70 threshold less meaningful. That’s obfuscation by complexity, deployed to defend something that may be indefensible. If you drill into their literature and see scales where r = .55 or something similar, understand that the scale is weak and a poor measure of whatever it’s attempting to identify.
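
For the curious, the internal-consistency statistic behind most of those reliability claims is Cronbach’s alpha, and it’s simple enough to compute yourself. Here is a minimal sketch; the item responses are hypothetical, and the .70 benchmark is the traditional cutoff discussed above.

```python
# Minimal sketch with hypothetical responses: Cronbach's alpha, the usual
# internal-consistency reliability estimate for a trait scale. Scales
# landing below ~.70 should be treated with caution.

from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """items[i] holds every respondent's answer to item i of one scale."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]   # per-respondent sums
    item_var = sum(pvariance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Four items of one scale, answered by six respondents (1-5 ratings)
scale = [
    [4, 5, 3, 5, 2, 4],
    [4, 4, 3, 5, 2, 3],
    [5, 5, 2, 4, 1, 4],
    [3, 4, 3, 5, 2, 4],
]
print(f"alpha = {cronbach_alpha(scale):.2f}")  # ~0.93 here: a strong scale
```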

Follow up?

There’s much more to understanding all the considerations of test construction and validation than what can be covered in the space of two blogs, but as they say, this is a start. Please email me at fgump@2oms.com with questions, or comment below. You can also reach us on Twitter at @ADGIGroup or on Facebook.


Picking a Test that Works and Suits Your Needs, Part 1

Trying to navigate through test validity and reliability is a jungle! After reviewing a myriad of validation claims over the years, you begin to realize that truth is sometimes hard to find.

Here are a couple of tips to help you with any investigation you might want to do. Remember, the goal of any test is to add situationally-relevant insight, so if it doesn’t do that, you need to move on to something that will.

“Frankly, I’m shocked!”

Whereas claims from some test vendors are straightforward, others are disingenuous. Some vendors simply make claims with no supporting documentation, others publish weighty tomes with irrelevant content in the belief that people will associate truth with weight and technical complexity, and still others try to support their claims with nonsensical information. And now there’s a new twist: Some vendors are trying to reframe accepted measures of validity or reliability to make their instruments look better than they really are. Lipstick on a pig? Sure sounds like it…

Pick the right tool for what you want to do.

If you are going to use a behavioral assessment, you first need to make sure that you are selecting the right type of instrument for your needs. There are two types of tests to choose from: a normative design and an ipsative design. A normative test is intended for decision making, because it compares individuals to a work group or a defined population and allows individuals to be compared to one another. In other words, when you’re trying to determine whether or not a new hire fits your company culture, this is the best option. In contrast, ipsative instruments are most appropriate for personal discovery or group-understanding applications, where people are not compared with one another and decisions are unnecessary. Such tests are based upon self-referent measures of relative behaviors and strengths and don’t offer a meaningful basis for comparing people. Ipsative tests are primarily intended for coaches and trainers who are trying to identify the talents of their clients and teams.

Although some vendors of ipsative instruments point out the purposes and limitations of their test design, others don’t. Here’s where the spin comes in: At least one vendor goes so far as to claim that, because their test has more than 10 scales, its results approximate those of a normative test – which raises the question: Why not just use a normative test rather than a wannabe?

The bottom line is: Don’t get blinded by brand or fooled by spin. Find out which tools are appropriate for your applications and information needs.

Understand what to look for.

Validity and reliability in a business decision-making context are really very simple:

A test or instrument should measure what it claims to measure, which is called construct validity. For example, if a test measures social initiative and friendliness, does it accurately distinguish between those who are more sociable and those who are not?

A test should show evidence that the scales have internal consistency and that repeated test results are consistent. This is reliability. If that test supposedly measuring social initiative shows different results over several administrations, then it’s really not measuring anything.

Finally, a test should be able to predict, in a statistically significant way, some performance outcome. This is criterion or predictive validity. If you are using a test to make placement decisions, then more accurately predicting performance, or some dimension of performance, is the goal.
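
To ground the reliability point (the second item above), here is a minimal sketch with made-up scores: the same people take the same scale twice, and a high correlation between the two administrations indicates the scale is measuring something stable.

```python
# Minimal sketch with hypothetical data: test-retest reliability as the
# correlation between two administrations of the same scale.

from statistics import correlation  # Python 3.10+

first_admin  = [14, 22, 18, 27, 11, 25, 19]
second_admin = [15, 21, 17, 28, 12, 24, 20]

print(f"r = {correlation(first_admin, second_admin):.2f}")  # high by design
```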

In the next post, you can learn how to avoid statistical data spins.


How to Get Real Payback from Behavioral Assessment

Are you using some sort of testing to assess candidate or employee behaviors? Recent studies indicate that between 20 and 33 percent of employers now use behavioral tests and diagnostics, so it’s worthwhile to ask, as Dr. Phil would say, “How’s that working for you?”

For more than 40 years we have been observing how companies use tests and assessments, and it’s our belief that most organizations really don’t know how well their assessment tools are working. Of course, this is true for a lot of what goes on in HR, because a lot of what goes on is, in fact, difficult to isolate and measure in its own right.

But measuring the operational and financial impact of testing is essentially just keeping score. What matters more is how organizations implement and manage the assessments they use. In a series of subsequent articles, I will uncover what we believe are the six most critical reasons why behavioral testing fails to deliver on what should be reasonable expectations in any organization. Address these, and testing can work for you!

Where are we going to go with this series? We’ll take a look at:

  • The importance of interpreting the validity and reliability of data
  • How test proliferation and commoditization lower user expectations
  • Why a poor understanding and use of job analysis guts the assessment process
  • What both HR and operational managers need to know about workplace behavior
  • What test users should be able to do with technology and data analysis
  • How weak command and control can cause unnecessary problems and undermine the potential value of assessment

Stay with us and let us know your experiences.
