
In this case study, we aim to cover two things:

1) How Data Science is currently applied within the Logistics and Transport industry

2) How Cambridge Spark worked with Perpetuum to deliver a bespoke Data Science and Machine Learning training course, with the aim of developing and reaffirming their Analytics team's understanding of some of the core Data Science tools and techniques


The bespoke training course we ran with Perpetuum was Core Data Science Using Python. Get in touch to see how this Data Science short course could benefit your organisation.


Introduction

The Logistics industry is enormous: it is estimated to reach a value of $15.5tn by 2023, with volumes likely to reach 92.1bn tonnes by 2024. Today, supply chains face high levels of change from internal and external pressures, from rising costs to challenger start-ups reshaping the landscape through technology and automation. This is prompting the digital transformation of a seemingly traditional sector, and Data Science and Machine Learning are the driving force behind much of this change.

Machine Learning has the potential to revolutionise the Logistics and Transport industry by determining the most important factors for the success of a supply network, while continuing to learn in the process.

We're hearing a lot about how other industries have applied Data Science to transform their spaces, but how is the Logistics and Transport industry currently applying Data Science to get ahead of the curve and become more operationally productive?


Current applications of Data Science in Logistics/Transport

As DHL has highlighted in recent years, Big Data and logistics are made for each other. Companies are often sitting on masses of under-utilised data that could aid them in a number of ways.

One current application of Data Science by data-driven businesses within the industry is predictive, condition-based maintenance, which is the focus of the case study below.


Case study

We recently delivered a bespoke two-day Machine Learning and Data Science course as part of a professional development programme for some of the staff at Perpetuum, a global leader in the provision of information through its award-winning self-powered wireless sensors and vibration engineering in the Rail industry.

To enable Cambridge Spark to adapt the delivery of the content to the Python expertise of the Perpetuum participants, we provided Python exercises in advance on our proprietary adaptive learning platform, K.A.T.E.®, giving us full visibility of each individual's abilities and enabling a personalised learning experience on the day.


How Perpetuum are applying Data Science

We spoke to the Software Engineer and Analytics Team Lead at Perpetuum to gain more insight into how they are already applying Data Science within their organisation. Here’s what he had to say:

Using the data we receive from vehicular sensors and turning it into live information on the condition of the assets, we save companies significant amounts of money every year. Rather than having to maintain everything on a time or distance basis it’s done on the actual condition. They therefore improve safety — which is a primary concern — and can ensure they avoid having to scrap their assets earlier than required, inflicting unnecessary costs.

Today, Perpetuum have a huge number of sensors deployed on vehicles running on rail networks around the world, generating large volumes of vibration data which their analytics team process and turn into actionable information. From there they look for the kinds of trends and traces that would indicate a need for maintenance.


Delivering advanced Data Science and Machine Learning modules as a means of continuous professional development for their team

The course was delivered by one of our expert tutors, enabling the team at Perpetuum to dive deep into the theory of various modules, before undertaking hands-on exercises with Jupyter notebooks to reinforce and practically apply their learnings.

During the bespoke two-day course, their specialist Analytics team undertook the following modules and topics, each illustrated below with a short code sketch of the style of exercise covered:

Unsupervised learning

  • K-means clustering
  • Hierarchical cluster analysis
  • Density-based clustering (DBSCAN)
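
As a flavour of the hands-on Jupyter exercises in this module, here is a minimal K-means clustering sketch using scikit-learn. The synthetic data and parameter choices are illustrative assumptions only, not Perpetuum's code or data.

    # Minimal K-means sketch on synthetic data (illustrative assumption, not Perpetuum's data)
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # Generate three well-separated clusters of 2-D points
    X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

    # Fit K-means with k=3 and inspect the learned cluster centres
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
    labels = kmeans.fit_predict(X)
    print(kmeans.cluster_centers_)
    print(labels[:10])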

Supervised Learning

  • Introduction to supervised learning
  • The k-Nearest Neighbour algorithm
  • Overfitting, underfitting, bias-variance tradeoff
  • Cross-Validation and hyperparameter tuning
  • Naive Bayes
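
As a flavour of this module's exercises, here is a minimal sketch combining the k-Nearest Neighbour algorithm with cross-validated hyperparameter tuning in scikit-learn. The Iris dataset and the grid of k values are illustrative assumptions only.

    # k-NN with cross-validated tuning of k (illustrative sketch)
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Tune the number of neighbours with 5-fold cross-validation
    grid = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [1, 3, 5, 7, 9]}, cv=5)
    grid.fit(X_train, y_train)

    print("Best k:", grid.best_params_["n_neighbors"])
    print("Held-out accuracy:", grid.score(X_test, y_test))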

Decision Trees & Random Forests

  • Decision Trees
  • Intuition behind bagging and bootstrapping: concept, algorithm, and Random Forests in scikit-learn
  • Boosting, Gradient boosting
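
Here is a minimal Random Forest sketch in scikit-learn of the style covered in this module; the dataset and settings are illustrative assumptions only.

    # Random Forest: a bagged ensemble of decision trees, each fit on a bootstrap sample
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(forest, X, y, cv=5)
    print("Mean cross-validated accuracy:", scores.mean())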

Boosting methods

  • Intuition behind Boosting classifiers, visualisation, Boosting methods in scikit-learn
  • AdaBoost, XGBoost, LightGBM
  • Stacking, Stacking with cross-validation, Stacking in scikit-learn
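
Finally, a minimal boosting and stacking sketch in scikit-learn of the style covered in this module; the base learners, meta-model and dataset are illustrative assumptions only.

    # Gradient boosting and stacking with cross-validation (illustrative sketch)
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # Boosting: an additive ensemble of shallow trees, each fit to the previous errors
    boosted = GradientBoostingClassifier(random_state=0)
    print("Boosting CV accuracy:", cross_val_score(boosted, X, y, cv=5).mean())

    # Stacking: out-of-fold predictions from the base learners feed a logistic regression meta-model
    stack = StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(max_depth=3)), ("gb", boosted)],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5,
    )
    print("Stacking CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())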

Outcomes of the training

When asked to summarise the experience of working with Cambridge Spark to provide continuous professional development for their Analytics team, here’s what Perpetuum had to say…

The programme was great, we really liked the approach on the material. I think it was an ideal way to ensure we knew a reasonable level before we started the course. The overall communication was good and Tim [our tutor] was excellent; we were really impressed with him. He was competent and able to explain things at a good level. 

Interested in training for your teams?

Whether you're looking to train 5 people or 100 people, we have a variety of scalable training solutions to help you address a wide spectrum of training needs within the fields of Data Science, Artificial Intelligence, or Software Engineering.

Please contact us with your details and any known requirements. We'll then get in touch and guide you every step of the way.