Employment Screening

Explore top LinkedIn content from expert professionals.

  • View profile for Anupam Mittal
    Anupam Mittal is an Influencer

    Founder & CEO @ People Group | Tech & D2C Builder & Investor 🦈 @Shark Tank India

    1,650,820 followers

    Most people get Reference Checks wrong! Here's how to get them right 👉🏻

    Throughout my journey, I've had to make 1000s of hires and often struggled with evaluation through standard interviewing processes. I read somewhere that ~60% of senior hires go wrong even after the most meticulous processes, so I wondered how to improve the odds. 🤔

    What I discovered is that there's no substitute for spending time with candidates and conducting 'unnamed' ref checks through your own network. But I also learnt that not every ref check is the same, and you can end up with very different outcomes depending on how it's done. So, through reading and experience, I came up with a set of best practices that I christened with the acronym "PEARL", and here it is for the FIRST time🔥

    P - Promise Reciprocity
    Busy professionals don't dole out intel freely. So, you must offer to return the favor, something as simple as "If you ever need my help for a ref check or otherwise, I'd be happy to help." A senior leader will immediately see its value and perhaps become more 'available' on the call.

    E - Ensure Confidentiality
    This is critical, especially in India. Candor is not part of our culture, so assure the referrer that you understand the sensitivity of this call and will keep it 100% confidential, and that you'd expect the same if they ever choose to call you for a reference. If you still sense some hesitancy, maybe throw in an 'offer' of a good-faith NDA. Don't worry, nobody ever takes it up, but it makes them less guarded.

    A - Ask questions that force specificity (close-ended & open-ended)
    Broad questions like "How was their work ethic?" or "Does she work hard?" are a complete waste of time. You need to ask second-order questions that make it comfortable for the referrer to answer without feeling like they're maligning the candidate. For example, "How do you think we can help the candidate grow?" is better than "Can you tell me about their weaknesses?"

    R - Retrieve critical insights
    Actively listen and probe for specifics. Did the candidate consistently meet deadlines? Why or why not? How did they handle pressure? Did they run towards solving problems or look for directions to carry out? These details paint a picture beyond the resume.

    L - Learn rehire potential
    And finally, the golden question: "Would you re-hire or work with the candidate again? Why or why not?" Regardless of what the referrer may have said up to this point, most senior folks will have a hard time giving you a false or misleading response to this one. This is the true gauge of the candidate's potential and the one I put the most weight on.

    To conclude, thank the referrer for their time, assure confidentiality again, and commit to the quid pro quo. This leaves the door open for other ref checks you might wish to do in the future 😏

    So, there you have it - a PEARL from my collection🙌🏻 Do comment with something that's worked for you that I may have missed :)

    #hiring #startups #leadership

  • View profile for Shreya Mehta 🚀

    Recruiter | Professional Growth Coach | Ex-Amazon | Ex-Microsoft | Helping Job Seekers succeed with actionable Job Search Strategies, LinkedIn Strategies,Interview Preparation and more

    133,455 followers

    I've reviewed 500+ applications as a recruiter at Amazon, Microsoft, and TikTok. This is the kind of resume that gets rejected in 3 seconds. I'll break down why such resumes fail to create an impact and how you can avoid these mistakes.

    Problem 1: Too much, too soon
    Two degrees, 15+ courses, and 30+ tools listed - all in the top half. Recruiters don't need a tech-stack dump upfront. Instead:
    ➡️ Start with a skills summary tied to impact-driven achievements.
    ➡️ Highlight tools you've mastered, not dabbled in.

    Problem 2: Responsibilities ≠ results
    "Worked with IT to maintain PC and network health." Okay... but how did it matter? Reduced downtime? Saved costs? Improved performance by X%? Instead:
    ➡️ Write impact-focused bullets, e.g., "Reduced network downtime by 35% through system upgrades."

    Problem 3: Irrelevant experience
    An Amazon Prime Shopper role at Whole Foods is listed in detail. Unless you're applying for retail or logistics roles, this distracts. Instead:
    ➡️ Group unrelated roles under a single "Other Experience" section.
    ➡️ Focus on transferable skills like teamwork, deadlines, or inventory handling, but keep it brief.

    Problem 4: Projects without purpose
    Projects sound impressive but lack outcomes, e.g., "Built an AI model to detect human emotion." Questions recruiters ask: What accuracy did it achieve? Was it deployed? How did it solve a problem? Instead:
    ➡️ Add metrics, e.g., "Improved emotion detection accuracy by 20% and reduced processing time by 15%."

    Here's the hard truth: Most resumes don't fail because candidates lack skills. They fail because they don't communicate impact.

    If you're not receiving calls from recruiters despite applying to 100s of jobs, your resume may be the reason. Repost this if you found value.

    P.S. Follow me if you are an Indian job seeker in the U.S. I share insights on job search, interview prep, and more.

  • View profile for Brendan Williams

    AI/ML Sourcing Specialist | 47 Placements · 270 Candidates · 30-45% Response Rates | I Find Engineers Through Their Code, Not Their CVs | Building Savvy Recruiter | First2 Group

    9,006 followers

    I rejected a perfect candidate last year. Not me personally. My AI screening tool did. 𝐈 𝐝𝐢𝐝𝐧'𝐭 𝐞𝐯𝐞𝐧 𝐤𝐧𝐨𝐰.

    3 first-author papers on reinforcement learning. 200+ Google Scholar citations. Stanford-funded research. The kind of profile recruiters dream about. The AI scored them 34 out of 100.

    Why? Their CV said "statistical learning systems" instead of "machine learning." That's it. One synonym. The tool couldn't make the connection.

    I only found out because I manually reviewed the reject pile on a hunch, 47 profiles deep into an 8-hour sourcing session. If I hadn't looked, my competitor would have placed them.

    (Most recruiters don't know their AI screening tools can't distinguish between technical synonyms, and they're making decisions on hundreds of thousands of applications.)

    This isn't a one-off. Across 28 businesses, I've documented the same pattern: AI systematically rejects candidates with non-linear careers, unconventional project descriptions, or terminology that doesn't match the job spec word-for-word. 19% of organisations using AI in hiring admit their tools screen out qualified people. SHRM published that number. The real number is higher. Most teams don't check.

    Here's what I changed: every AI-screened shortlist gets a human verification pass. Every one. I built a prompt engineering framework for JD analysis so the AI actually understands context before it scores. Time-to-screen dropped 60%. Not because the AI got better. Because a human catches what it misses.

    The EU AI Act classifies every CV screening tool as high-risk. August 2026. 115 days. Fines up to 35M euros. Most recruiting teams still can't explain what their AI tools actually do.

    Do you manually check your AI-screened shortlists, or do you trust the scores? Save this before your next screening audit.
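The synonym failure described in the post above can be reproduced in a few lines. This is a hypothetical sketch, not the author's actual tool: a naive exact-keyword screener scores the CV low because "statistical learning systems" never matches "machine learning", while a small synonym-normalization pass (one cheap mitigation) recovers the match. All keywords, phrases, and the sample CV text are invented for illustration.

```python
# Hypothetical sketch: why exact-keyword screeners miss technical synonyms,
# and how normalizing known variants closes the gap. All data is illustrative.

JD_KEYWORDS = {"machine learning", "reinforcement learning", "python"}

# Map variant phrasings onto the job-spec vocabulary (assumed, hand-built).
SYNONYMS = {
    "statistical learning systems": "machine learning",
    "deep rl": "reinforcement learning",
}

def naive_score(cv_text: str) -> float:
    """Fraction of JD keywords that appear verbatim in the CV."""
    text = cv_text.lower()
    hits = sum(1 for kw in JD_KEYWORDS if kw in text)
    return hits / len(JD_KEYWORDS)

def synonym_aware_score(cv_text: str) -> float:
    """Same scoring, but rewrite known synonyms to canonical form first."""
    text = cv_text.lower()
    for variant, canonical in SYNONYMS.items():
        text = text.replace(variant, canonical)
    return naive_score(text)

cv = "First-author papers on statistical learning systems and deep RL; Python."

print(naive_score(cv))          # only "python" matches verbatim -> 1/3
print(synonym_aware_score(cv))  # all three match after normalization -> 1.0
```

In practice an embedding-similarity check would generalize better than a hand-built synonym map, but the point stands either way: exact string matching cannot see technical synonyms, which is why a human verification pass over the reject pile matters.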

  • View profile for Niels Van Quaquebeke

    Human | Professor of Leadership | Author, Speaker, Educator | Psychologist, on a mission to improve leadership at work.

    14,269 followers

    Tightening your decision filters might make you feel smarter, but it could make your decisions dumber.

    A recent study followed a startup accelerator through 3,580 project submissions and three redesigns of its selection process. The goal? Reduce false positives (backing flops) and false negatives (missing stars). Despite increasing structure - more weight on track record, more screening layers - errors persisted. In fact, the strictest regime produced more mistakes.

    Two psychological culprits explain why:
    1. Mean reversion: Over-reliance on past success dampens our sensitivity to fresh potential. No glittering CV? No chance.
    2. Within-type adverse selection: The tougher the screen, the more motivated average applicants are to mimic the cues of brilliance - and get through.

    But here's what struck me most: every redesign felt rational, even smart. More rigor. More data. More process. And yet, it missed something deeply human. Real potential - whether in people, ideas, or startups - is often messy, unfinished, and hard to score. And evaluators, like all of us, lean toward what's legible, familiar, or credentialed.

    So what's the takeaway?
    👉 More filters don't guarantee better picks.
    👉 Relying on proxies (track record, polish, fluency) can backfire.
    👉 True innovation sometimes sounds awkward at first - because it's new.

    If we want to stop selecting the best presenters of ideas and start backing the best ideas, we need to design selection systems that don't just reward polish. Because sometimes, the next big thing doesn't look like a sure bet. It looks like a question mark.

    https://lnkd.in/dRz8FeNe

  • View profile for Ingrid Barbosa-Farias

    Founder | Machine Learning | Molecular Simulation | Drug Development

    3,534 followers

    Adding a short molecular-dynamics (MD) step after docking in virtual drug screening can cut wet-lab costs by >50% - savings that matter especially for startups and small biotechs needing to stretch their runway - yet few are using it.

    🔸 A <5 ns "shake-out" MD run + MM/PBSA rescoring can more than double the confirmed hit rate by removing docking false positives (Graves 2008; Brooijmans 2010).
    🔸 Wet-lab costs scale almost linearly with compounds tested (~$800/compound). Twice the hit rate means half the compounds and half the spending.
    🔸 A few GPU minutes per ligand cost pennies but can save hundreds or thousands of dollars in assays.

    Back-of-the-envelope example (1M-compound screen):
    • Docking only → 10% hit rate (100 hits / 1,000 tested) ≈ $800k
    • Docking + MD → 20% hit rate (100 hits / 500 tested) ≈ $400k

    Feel free to reach out if you are planning a screening campaign. Happy to chat. SimAtomic

    #MolecularDynamicsSimulation #HitIdentification #VirtualDrugScreening #Biotech
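The post's back-of-the-envelope arithmetic can be checked directly. A minimal sketch using only the figures quoted above (~$800/compound and a goal of 100 confirmed hits); the hit rates are the post's illustrative estimates, not measured values:

```python
# Check the post's numbers: how many compounds must be wet-lab tested to
# confirm 100 hits at a given hit rate, and what that costs at ~$800 each.

COST_PER_COMPOUND = 800  # USD per wet-lab assay, the post's estimate
TARGET_HITS = 100        # confirmed hits the campaign is aiming for

def wetlab_cost(hit_rate: float) -> int:
    """Compounds tested = target hits / hit rate; cost scales linearly."""
    compounds_tested = TARGET_HITS / hit_rate
    return round(compounds_tested * COST_PER_COMPOUND)

docking_only = wetlab_cost(0.10)     # 1,000 compounds tested
docking_plus_md = wetlab_cost(0.20)  # 500 compounds tested

print(f"Docking only: ${docking_only:,}")     # $800,000
print(f"Docking + MD: ${docking_plus_md:,}")  # $400,000
print(f"Savings: {1 - docking_plus_md / docking_only:.0%}")  # 50%
```

Doubling the hit rate halves the assay bill exactly because cost is linear in compounds tested, which is the whole argument of the post.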

  • View profile for Zhengzhong Tu

    AI Prof @ TAMU | AI @ Google Research | PhD @ UT-Austin | BS @ Fudan | Generative AI | Multimodal AI | Trustworthy AI | Embodied AI | Agentic AI | MLSys

    27,155 followers

    ICLR'26 has decided to 𝗱𝗲𝘀𝗸-𝗿𝗲𝗷𝗲𝗰𝘁 papers with 𝗵𝗮𝗹𝗹𝘂𝗰𝗶𝗻𝗮𝘁𝗲𝗱 𝗿𝗲𝗳𝗲𝗿𝗲𝗻𝗰𝗲𝘀 generated by LLMs. That raises a practical question for every author: how do we verify citations reliably?

    We're excited to share our new paper, 𝗕𝗶𝗯𝗔𝗴𝗲𝗻𝘁, an agentic citation verification framework designed to make reference checking auditable. BibAgent traces where a claim is supported, surfaces evidence spans, and reports confidence rather than guessing.

    When a cited paper is behind a paywall, it can switch to a community-based "evidence committee" approach: collect downstream open-access citers, distill what they attribute to the paywalled work, and decide by consensus, or abstain if evidence is insufficient.

    We also propose a unified miscitation error-code taxonomy and release 𝗠𝗜𝗦𝗖𝗜𝗧𝗘𝗕𝗘𝗡𝗖𝗛, a large cross-disciplinary benchmark of miscitation cases.

    If you're building LLM writing assistants, submission pipelines, or research integrity tooling, this is meant to be a step toward: draft fast → verify rigorously → publish faithfully. Ultimately, we hope this research helps 𝗳𝗮𝗰𝗶𝗹𝗶𝘁𝗮𝘁𝗲 𝗳𝗮𝗶𝘁𝗵𝗳𝘂𝗹 𝗮𝗻𝗱 𝘁𝗿𝘂𝘀𝘁𝘄𝗼𝗿𝘁𝗵𝘆 𝘀𝗰𝗶𝗲𝗻𝘁𝗶𝗳𝗶𝗰 𝗽𝘂𝗯𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀, so that emerging agents for scientific discovery can build on literature that's genuinely grounded, not citation-shaped.

    Paper link: arxiv.org/abs/2601.16993

    #AI #LLMs #ResearchIntegrity #OpenScience #NLP #ScientificDiscovery #TrustworthyAI
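The "decide with consensus, or abstain" rule of the evidence-committee idea described above can be sketched as a toy decision function. To be clear, the quorum, the consensus threshold, and the vote labels below are all assumptions for illustration, not BibAgent's actual design:

```python
# Toy sketch of a consensus-or-abstain rule for an "evidence committee":
# several open-access citers each vote on whether the paywalled work supports
# a claim; accept a verdict only on strong agreement, otherwise abstain.
# Quorum, threshold, and labels are assumed values, not the paper's.

from collections import Counter

def committee_verdict(votes: list[str], quorum: int = 3,
                      consensus: float = 0.8) -> str:
    """votes: per-citer judgments, each 'supports'/'contradicts'/'unclear'."""
    usable = [v for v in votes if v != "unclear"]
    if len(usable) < quorum:
        return "abstain"  # not enough usable evidence to decide
    label, count = Counter(usable).most_common(1)[0]
    if count / len(usable) >= consensus:
        return label      # strong agreement among citers
    return "abstain"      # evidence is split

print(committee_verdict(["supports", "supports", "supports", "unclear"]))
print(committee_verdict(["supports", "contradicts", "unclear"]))  # too few votes
```

The key design point carried over from the post is that abstention is a first-class outcome: when downstream citers disagree or are too few, the system reports uncertainty instead of guessing.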

  • View profile for Jan Beger

    Our conversations must move beyond algorithms.

    89,522 followers

    This paper evaluates a cost-efficient strategy for using LLMs in health systems, addressing the underexplored economic and computational challenges of their utilization at scale.

    1️⃣ The paper focuses on query concatenation - a method of grouping multiple clinical tasks and notes into a single input - to optimize performance, scalability, and cost-effectiveness without compromising accuracy.
    2️⃣ Over 300,000 experiments were conducted with real-world clinical data, showing that performance deteriorates as the number of simultaneous tasks and text inputs increases.
    3️⃣ High-capacity models like GPT-4-turbo-128k and Llama-3-70B showed strong resilience, maintaining accuracy and formatting even under heavy task loads.
    4️⃣ The optimal task burden was identified as 50 simultaneous tasks; beyond that, accuracy and formatting declined for most models.
    5️⃣ The proposed concatenation method for combining multiple queries led to up to 17-fold cost savings at scale compared to traditional single-query methods.
    6️⃣ External validation with public datasets confirmed the trends, supporting the utility of high-capacity LLMs in medical settings for cost-effective, large-scale tasks.

    ✍🏻 Eyal Klang, Donald Apakama, Ethan Abbott, Akhil Vaid, Joshua Lampert, MD, Ankit Sakhuja, Robert Freeman, Alexander Charney, David Reich, Monica Kraft, Girish Nadkarni, Ben Glicksberg. A strategy for cost-effective large language model use at health system-scale. npj Digital Medicine. 2024. DOI: 10.1038/s41746-024-01315-1
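The economics behind query concatenation come from amortizing the fixed per-call overhead (system prompt, instructions, output schema) across many tasks. A back-of-the-envelope sketch with made-up token counts; the paper's actual prompt sizes and pricing are not given here, so this toy model illustrates the mechanism rather than reproducing the 17-fold figure:

```python
# Why concatenating queries cuts cost: each API call pays the shared prompt
# overhead once, so batching N tasks per call amortizes it. Token counts
# below are invented for illustration, not figures from the paper.

INSTRUCTION_TOKENS = 1_500  # shared prompt sent with every call (assumed)
TOKENS_PER_TASK = 200       # clinical note + question per task (assumed)

def total_tokens(num_tasks: int, batch_size: int) -> int:
    """Tokens billed when grouping tasks into batches of `batch_size`."""
    num_calls = -(-num_tasks // batch_size)  # ceiling division
    return num_calls * INSTRUCTION_TOKENS + num_tasks * TOKENS_PER_TASK

single = total_tokens(10_000, batch_size=1)    # one task per call
batched = total_tokens(10_000, batch_size=50)  # the post's optimal task load

print(single, batched, f"{single / batched:.1f}x cheaper")
```

The achievable multiple depends entirely on how large the shared overhead is relative to the per-task text, which is why the paper's measured savings ("up to 17-fold") cannot be derived from invented numbers; the batch size also cannot grow without bound, since the paper finds accuracy degrading past ~50 simultaneous tasks.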

  • View profile for Charles Rue

    Global Head of Talent Acquisition at S&P Global

    34,782 followers

    Clearly, the approach to job application assessment will have to change drastically as unmanageable waves of applications - many of them AI generated - hit recruiters. I'm seeing fairly common job openings now often gathering over 1,000 applications within a day of being posted online, many submitted at odd times, usually the second the job posting is scraped by an AI.

    With the emergence of AI in recruitment, many candidates have put their job search in the hands of an AI agent. These bots scrape relevant job postings, analyze requirements, and then generate an entire application - cover letter, CV, etc. An agent can apply to a few hundred roles daily and will continue doing so until it's switched off, which may never happen, as candidates may want to continuously test the market.

    Looking at these applications, the cover letters look exactly the same, using the now-familiar AI-generated phraseology, and the CVs are so similar that recruiters often assume they originate from scammers and other impersonators.

    From a volume perspective, it's impossible for a human to effectively find the right skills and candidates in a stack of 1,000 applications. And old-school keyword-based assessments are no longer effective, because AI bots pepper generated CVs with keywords found in the job posting, when they are not copying and pasting entire sections of the job spec.

    What are the solutions? In the short term, from my perspective, beyond using the usual behavioral signal screening (e.g., time spent on job descriptions) and adding friction to the application process (e.g., limiting how many roles someone can apply to, or asking simple thoughtful questions to make sure they've put in real effort), I believe firms should start rethinking assessment relative to skills and roles. More specifically:

    1) Embed more pre-qualification assessments or simulations before a recruiter spends time reviewing the application (e.g., role-relevant assessments, skills quizzes, situational judgment tests) early in the funnel, to make sure candidates meet basic requirements.

    2) Build talent pools (vs. focusing on requisition-based recruitment) - proactively building groups of interested, qualified people we can reach out to when the time is right. This requires more planning and a mature TA org, though.

    3) Ultimately, firms should actively embrace resume-free screening for selected roles: skip the CV entirely and ask for proof of ability (e.g., a test).

    In the long run, I'd be interested to see how the most advanced assessment suppliers innovate in candidate identity, skills and reputation portability, and recruiter-side AI used to contextualize fit (e.g., using career trajectory, digital footprint, etc.).

    #JobApplication #CandidateAssessment #TalentAcquisition #Skills https://lnkd.in/eMGpHD2c

  • View profile for Nishant Nihar

    Quizzer | Content & Marketing | AI | Activist | TEDx Speaker | Stuck between Humanity, Business & Tech

    5,867 followers

    One of the first things companies recruiting candidates absolutely need to do is remove the concept of CVs.

    People from fancy colleges land jobs with big brands, which in turn boast of hiring from fancy colleges, which in turn boast the big company logos in their placement brochures. This is a vicious cycle. Education has become a business. Gullible parents will pay increased fees next year hoping to secure a better future for their ward. Marketing drives sales.

    Instead, simply list the requirements you need and let the candidate tick the ones he/she is good at. Don't ask for his/her college or ex-company name. If the candidate aces the test, he/she has justified their skills - that's all you need. In the culture-fit and subsequent rounds, you can get a better picture anyway. But screening based on brands should stop.

    This will also stop folks from madly rushing after branded institutions and instead let them pursue knowledge - the need of the hour. And most importantly, it creates a level playing field for skillful candidates from unheard-of colleges and companies to make an impact. Inclusivity, in its true sense.

    I recently had a harrowing experience with my start-up hiring, and I decided to stop completely - only well-vetted referrals now. As founders, some of you may relate. At the crux of a burgeoning economy lies the time-gap challenge. Rejecting 100 CVs to land on that one candidate, who was probably the underdog, is traumatic.

    Instead, twist the process. Skills first - brand names later. Think.

    #hiring #hr #startups #jobs #skills #employees
