New features and more from the last 12 months
At Silverpond we are always looking for ways to improve HighLighter – whether that’s new features, implementing suggestions from users, or making it run better under the hood.
HighLighter has grown up in the last year, and there’s a lot to explore. Set aside 15 minutes this week to have a look at our new features and give them a go yourself!
Viewing & Mapping Annotations
Monitoring a labelling team’s work is key to ensuring an ML model learns what it needs to. We developed the following features to help managers view and alter labelling team efforts – and all in a traceable format to support the transparency of your project.
Go to your project’s Annotations tab to view all the labels made by your team and filter for convenience. Even better, you can edit the annotations and see more details right from thumbnail view.
In addition to seeing all the annotations in thumbnail view, you can also view them plotted on a map.
Annotation Queues
Quality training data is key to creating a great machine learning model. To assist with reviewing the quality of annotations, we’ve developed the annotation queues functionality. This is beneficial in three ways:
1. Quality assurance.
Reviewing the annotations allows you to monitor the key work performed by the labelling team.
2. Add metadata.
Adding metadata to annotated images is especially useful as you iterate on your project or bring in a subject matter expert to review the annotations already made.
3. Division of skilled and unskilled labour.
Depending on the objects you are identifying, annotation queues can be used to make the most of your skilled labelling team. In our pavement defects project, unskilled labellers could identify the defects and, importantly, submit images of pavements where no defects exist. A subject matter expert could then review these annotations and use their experience to make judgements, such as assessing the severity of a defect.
Inter-Rater Agreements… a fancy term for QA
The Inter-rater agreement is one of our most exciting new features and one that will help with quality control of your projects.
The inter-rater agreement allows you to set the same images to be annotated by different labellers and then compare the results. HighLighter determines a score for how much the labellers ‘agree’.
This is important as it allows you to give feedback to labellers, or open up discussions as to what constitutes the boundary of the object being labelled.
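HighLighter computes the agreement score for you, and its exact metric isn’t described here. To illustrate the underlying idea, though, a classic inter-rater statistic such as Cohen’s kappa could be computed over two labellers’ per-image class labels. This is a minimal sketch for intuition only, not HighLighter’s implementation:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    labels_a and labels_b hold each rater's class label for the same
    sequence of images. Returns 1.0 for perfect agreement, 0.0 for
    agreement no better than chance.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of images where the raters match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each rater's label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Two labellers disagree on one of four pavement images.
score = cohens_kappa(
    ["defect", "ok", "defect", "ok"],
    ["defect", "ok", "ok", "ok"],
)
```

In this example the raters agree on 75% of images, but because chance agreement is 50%, the kappa score comes out at 0.5, a more honest picture than raw percent agreement.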
Research Plans
The nature of machine learning projects requires us to constantly test and evaluate. To give context to the training and evaluation data in HighLighter, we have developed a research plan section where you can develop and save your research plans, and run and record experiments.
By documenting the research and experiments in one location, technical and non-technical team members can track the development of the project, providing context and accountability.
On-premise Enterprise installation
HighLighter is now available for on-premise installation for our enterprise clients. Despite all the advances in cloud computing, on-premise servers are still preferred by a number of organisations for security, regulatory and even cost considerations. Using industry-standard Docker containers, our team can ensure a simple and effective installation, allowing your team to get going with your ML projects.
Export your data with GraphQL
The aim of HighLighter, since its inception, has been to make it easier to run machine learning projects. The machine learning may be hard, but the processes and supporting infrastructure shouldn’t be.
To that end, we have added a GraphQL API to import, export and utilise your data from HighLighter. You can integrate HighLighter with your own systems and move data in and out for training and deployment.
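As a rough sketch of what an export call might look like, the snippet below builds an authenticated GraphQL POST request using only Python’s standard library. The endpoint URL, token, and every field name in the query are hypothetical placeholders; consult your HighLighter instance’s GraphQL schema for the real names:

```python
import json
import urllib.request

# Hypothetical query -- field names depend on your HighLighter schema.
EXPORT_QUERY = """
query AnnotationsExport($projectId: ID!) {
  project(id: $projectId) {
    annotations { id label geometry imageUrl }
  }
}
"""

def build_export_request(endpoint, token, project_id):
    """Build an authenticated GraphQL POST request for annotation export."""
    payload = json.dumps({
        "query": EXPORT_QUERY,
        "variables": {"projectId": project_id},
    }).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )

# Usage against a live instance (URL and token are placeholders):
# req = build_export_request("https://your-instance/graphql", token, "123")
# data = json.load(urllib.request.urlopen(req))
```

Because GraphQL lets the client name exactly the fields it wants, the same request shape works whether you are pulling annotations out for training or pushing results back for deployment.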
Want to know more...
Want to try HighLighter out for yourself? Or do you want to chat to our team about how HighLighter can help you manage your machine learning projects?
Complete the form below and we’ll be in touch shortly.