HermanScience was born out of Bill Reed’s realization that stress, disengagement, and a lack of trust were at the heart of organizational challenges.
In his interview with Website Planet, he shared how years of working with top companies like Adobe, Cisco, and Visa revealed that existing solutions failed to address these human factors effectively. That’s why he created a platform that leverages cognitive AI and visual neuroscience to measure and resolve these critical issues.
By tackling the root causes of workplace inefficiencies, HermanScience is transforming the way remote and hybrid teams operate, fostering a high-trust, low-risk culture.
What inspired the creation of HermanScience, and how does your platform address the unique challenges faced by remote and hybrid teams?
I spent years as a co-founder of a consulting firm that worked with major clients like Adobe, Cisco, HP, Oracle, SAP, and Visa, as well as many smaller firms. During that time, I noticed recurring challenges these organizations faced—especially in sales and marketing. However, at the core of every issue was a common theme: people.
The same key problems kept surfacing—stress, disengagement, and most importantly, a lack of trust within the organization. It became clear that no one had effective solutions to address these challenges. That realization was the turning point for me.
I decided to step away from my previous company and founded what is now HermanScience. Our platform is designed to measure and resolve these three pervasive issues—stress, disengagement, and trust. These factors contribute to 90% of organizational security risks, from phishing attacks to workplace safety incidents and overall productivity losses. At HermanScience, we focus on diagnosing these issues and providing solutions to fix them.
Can you explain how HermanScience’s use of cognitive AI and visual neuroscience assessments enhances traditional Human Risk Management practices?
We take Human Risk Management a step further by introducing what we call Human Engagement Risk Management. While the term Human Risk Management has been popularized by Gartner and Forrester, primarily as a replacement for security awareness training, the concept itself has been around for quite some time. Traditionally, it has been more of an HR term describing the risks posed by people within an organization.
At HermanScience, we leverage cognitive AI and visual neuroscience to measure what’s truly happening in people’s minds. We believe we’re the only ones capable of accurately measuring this. Our platform uncovers what we refer to as biomarkers—neuroscience-based indicators that reveal how a person’s brain is wired. These biomarkers allow us to identify strengths, attributes, and soft skills, which are crucial in understanding engagement and potential disengagement.
We also measure factors like norepinephrine, serotonin, and other neurotransmitters that are directly linked to trust and stress. Additionally, we evaluate how a person’s soft skills align with their job requirements, providing a comprehensive view of their professional well-being.
What sets us apart is that our platform seamlessly collects this data while individuals are training. Whether they’re engaged in security awareness, leadership training, or generative AI misuse courses, we assess their cognitive responses in real time. It’s not an intrusive process—there’s no “Big Brother” element. The data is collected organically as they interact with our gamified, escape room-style training modules.
This allows us to evaluate their performance while they train and, based on their results, prescribe additional training, such as trust-building, soft skills development, stress management, or leadership courses. Instead of just providing generic training, we focus on why and how individuals learn, tailoring the training to their specific needs and cognitive responses.
How does the integration of ChatGPT into your platform improve personalized messaging and engagement with employees and potential candidates?
Generative AI tools like ChatGPT, Claude, and others all suffer from the “garbage in, garbage out” problem. It’s important to understand their limitations. For example, if you ask a simple query like, “How many companies have revenue greater than $1 billion?” you’ll get a range of answers, from 250 to 25,000, and all of them will be wrong. Therefore, it’s essential to be cautious when using generative AI.
Within our platform, once an individual completes a training course that includes our assessments, they are directed to their personalized assessment report. This report reveals how their brain is wired and what their neurochemistry looks like. We also generate specific prompts for them, addressing various aspects of their performance and development.
Furthermore, our platform offers training courses that teach users how to use generative AI properly and, more importantly, safely. A major concern for security and HR professionals is the misuse of generative AI, which can expose intellectual property and personally identifiable information, leading to fines or lawsuits that can cost millions.
So, in addition to training employees on how to use generative AI productively, we also focus on preventing misuse. While individuals are training, our platform continuously evaluates and assesses their cognitive responses. This helps us identify biomarkers that may indicate potential issues, such as overwork, stress, or trust-related concerns.
Ultimately, we don’t just use generative AI for our own purposes; we also empower users to safely leverage it within the platform, ensuring they use it effectively and securely.
Could you share a success story where HermanScience’s approach significantly improved a client’s team performance or reduced security risks?
One success story involves a Chief Human Resources Officer (CHRO) at a leading technology firm. They started using our platform for recruiting and employee evaluations before integrating it with our training programs. In one case, their recruiting team used our Career Quotient Indicator (CQI) assessment, which takes about nine minutes to complete. Unlike traditional methods such as DISC, Big Five, or Myers-Briggs, our assessment goes much deeper, using visual neuroscience to identify biomarkers.
They were concerned about a particular candidate because of several red flags revealed by the assessment. However, the CHRO overruled the team, since they had struggled to find a qualified candidate, and decided to hire the person anyway. Unfortunately, the hire turned out to be a disaster, and the person had to be let go after only three months, costing the company hundreds of thousands of dollars. The CHRO later admitted, with some humor, that she should have trusted our assessment and dug deeper before making the hiring decision.
In another instance, an employee completed a security awareness training program from a more widely used provider. Afterward, they also completed one of our assessments, which raised yellow and red flags related to trust and stress factors. Unfortunately, the company did not act on these warning signs, for example by tightening email security filters or providing additional training. A few weeks later, despite having just completed the security awareness training, the employee clicked on a phishing email. That incident cost the company hundreds of thousands of dollars in damage.
These two incidents taught the company a valuable lesson. Our system is highly reliable, with a 93% accuracy rate, compared to the 50-60% accuracy of other solutions. They learned to trust our assessments, take appropriate actions, and implement stronger training and security measures. Now, they ensure that potential security risks are properly addressed, and they put safeguards in place for employees who may show signs of vulnerability.
What role does the GenAI Readiness Assessment and Training play in identifying soft skills and communication preferences, and how does it contribute to building high-trust, low-risk cultures?
We’ve learned that the top security concerns for professionals in both security and HR are very similar. According to Gartner, the number one concern for both HR and security professionals is generative AI. This is a concern that we’ve heard directly from hundreds of CISOs and CHROs. The reason for this is that 90% of employees, including 93% of Gen Z, are using generative AI, whether authorized or not. However, virtually none of them have received proper training on how to use it safely. As a result, they often rely on misinformation, assuming it is accurate, which can lead to significant issues in terms of productivity, security, and even exposing sensitive data on the web.
The problem becomes even worse when third-party vendors are involved. Companies often have no control over their third-party employees and don’t know who they can trust. Sensitive company information may be exposed unintentionally through generative AI, and organizations are left wondering if they can trust these external employees or whether they are properly trained in using AI tools without compromising security.
This is where our solution comes in. We address these concerns by offering engaging, interactive training experiences, like gamified escape rooms, to train employees on how to use generative AI safely and productively. Employees not only learn how to prompt AI effectively but also how to use it securely, minimizing the risk of exposing sensitive information. At the same time, we assess them during the training to identify whether they pose a security risk or have any trust issues.
For third-party vendors, before sharing sensitive information, we ensure that their employees complete our GenAI readiness training and assessment. This allows companies to verify that they can trust these external workers with sensitive data.
Unlike other solutions that only provide training on what to do, our platform goes further by assessing how and why employees use AI, measuring the risks, and offering a deeper understanding of potential vulnerabilities. We provide this service in a more cost-effective and time-efficient way compared to other solutions, with higher accuracy.
As a Veteran-owned company, how do your core values of trust, transparency, and teamwork influence your approach to Human Risk Management and organizational culture?
We define our approach as Human Engagement Risk Management because we believe the term goes beyond the industry standard of Human Risk Management. While the latter is commonly used to address security awareness and HR-related issues, it is essential to also consider employee engagement. According to Gallup, 80% of employees are disengaged, and of those, 20% may be so disengaged that they become disgruntled or even pose insider threats. This disengagement is often tied to stress and trust factors.
Drawing from our experience with the Veteran community, especially from our COO, who previously served as the Admin HR director for the US Navy SEALs, we’ve learned how vital it is to address these issues. His 13 years of experience with the Navy SEALs have greatly influenced our approach to managing both our solutions and our own company. We operate on the Entrepreneurial Operating System (EOS), which is a structured system that emphasizes core values—trust, transparency, and teamwork. These values are central to our success.
We recognize that other organizations also have their own core values, and they want to ensure that both their current employees and new hires align with their culture. We can help measure and test for this alignment, and more importantly, we assist employees in maintaining these values. We understand that trust and stress issues are daily challenges, and addressing these factors is crucial for improving employee engagement. Ultimately, we aim to tackle the three primary causes of 90% of security and HR incidents—stress, trust, and disengagement—by fostering a community that aligns with an organization’s culture and core values.
Find out more at:
www.hermanscience.com