
UL’s leap into the genAI evaluation business raises key questions

UL’s new offering “assesses AI model transparency, which is the ability to understand how an AI system makes decisions and produces specific results,” said the company’s news release. “By examining key areas such as data management, model development, security, deployment, and ethical considerations, the benchmark provides a clear and objective rating of an AI system’s transparency and trustworthiness that results in a marketing claim verification.

“A UL Verified Mark for AI Model Transparency may be displayed on products achieving a rating. Systems are awarded a score between 0 and 100 points, with higher scores indicating greater transparency. A score of 50 or less is considered ‘not rated,’ indicating significant transparency issues. Scores between 51 and 60 are rated as Silver, reflecting moderate transparency. Scores between 61 and 70 are rated as Gold, indicating high transparency. Scores between 71 and 80 are rated as Platinum, reflecting very high transparency. Scores of 81 and above are rated as Diamond, indicating exceptional transparency.”
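The quoted bands amount to a simple score-to-tier lookup. Here is a minimal sketch of that mapping, using only the thresholds quoted from UL's release; the function name `score_to_tier` and the implementation are illustrative, not UL's actual scoring code.

```python
def score_to_tier(score: int) -> str:
    """Map a 0-100 transparency score to a UL Verified Mark tier.

    The band boundaries and tier names come from UL's release;
    this helper itself is an illustrative assumption.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 50:
        return "Not rated"  # significant transparency issues
    if score <= 60:
        return "Silver"     # moderate transparency
    if score <= 70:
        return "Gold"       # high transparency
    if score <= 80:
        return "Platinum"   # very high transparency
    return "Diamond"        # exceptional transparency (81+)


print(score_to_tier(75))  # Platinum
```

Note that the bands are not evenly sized: everything at or below 50 is lumped into "not rated," while the four named tiers each span only about ten points.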

Jason Chan, UL Solutions' vice president of data and innovation, said in an interview with CIO that the company chose to evaluate only what the enterprise adds on top of the foundation model.
