Here you will find experience reports, points of view and opinions of our members on a variety of industry, business and social topics. We have just started publishing, so follow us on social media or visit this page again for more.

All contributions are available for download at the end of the corresponding post.


The Importance of IoT in a Post Covid-19 World

We are living in unprecedented times. The Coronavirus pandemic has taken many lives, and entire populations are slowly emerging from the severe lockdown measures imposed to control the spread of the virus. Millions of workers have been furloughed, with devastating results for the world economy. Almost a decade's worth of growth has been wiped off the FTSE 100 in the space of a few months.

Source: Factset

Some markets have thrived during the pandemic. E-Commerce retailers and online groceries have seen their business boom. With so many people working and socialising from home, demand for video conferencing and faster broadband is increasing exponentially. Manufacturers cannot keep up with the demand for personal protective equipment and perspex screens.

Many industries, however, have been decimated and will take a long time to recover. According to the US Bureau of Labor Statistics, unsurprisingly the Leisure & Hospitality, Wholesale and Retail, Manufacturing and Construction industries have been hit the hardest. BEACH (booking, entertainment, airlines, cruises, and hotels) stocks saw a $332B decline in value during the first month of the pandemic alone as governments restricted travel to over 100 countries*. Sports and concert arenas, cinemas, restaurants, bars and theatres that were empty for three months are now slowly coming back to life.

*Source: Visual Capitalist – Ycharts

So far, not so good, but there has never been such a compelling event for technology companies to innovate, collaborate and deliver IoT solutions to embrace the new normal.

IoT combines connected sensors with automated systems that collect and analyse information to help with a particular task or learn from a process. This will be critical to maximising response times when monitoring the spread of the virus and treating its victims. There is also no industry vertical that will not benefit from IoT to reduce the impact Covid-19 has had on businesses and to help get companies back on their feet.

Controlling the Virus and Staying Safe

Every government has faced the monumental task of gathering the data needed to make informed, science-based decisions on how to control the spread of the virus and protect healthcare systems from being overloaded. According to the Institute and Faculty of Actuaries, a number of issues affect the accuracy of the data in the UK, for example. The daily rate of new COVID-19 admissions is based on the catchment populations of the NHS Trusts reporting numbers each day. The problem with using open data is that the full detail is not available to perform data checking and validation. False positives and false negatives also contribute to the issue.

The world needs to be better prepared to deal with virus outbreaks and the development of an accurate and reliable testing regime is essential. Track and trace testing will require accurate testing devices. Connected thermometers and test kits can be used in hospitals, airports, offices and at public gatherings. Automatic uploading of results to software platforms that can analyse the data using machine learning and AI to remove the element of human error can produce heat maps for real-time disease surveillance to identify and act upon new clusters of infections.

Facilities managers are already seeing the operational benefits and cost savings from IoT sensors, and many of these use cases could be repurposed to make the building environment safer. For example, occupancy monitors can optimize working spaces and check whether social distancing guidelines are being adhered to. Smart hand wash dispensers not only alert when they need to be refilled; they also provide useful data on frequency of use for monitoring hand hygiene. Air quality and humidity sensors can feed a dashboard for ensuring the building is a Coronavirus-safe environment.
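A minimal sketch of the occupancy idea above; the 4 m² per person rule and all figures are invented assumptions for illustration, not official distancing guidance:

```python
# Illustrative occupancy check; the area-per-person rule is an invented
# assumption, not an official distancing guideline.
def distancing_alert(occupants: int, floor_area_m2: float,
                     min_m2_per_person: float = 4.0) -> bool:
    """True if the space exceeds its distancing-adjusted capacity."""
    safe_capacity = int(floor_area_m2 // min_m2_per_person)
    return occupants > safe_capacity

# A 40 m2 room at 4 m2 per person safely holds 10 people:
print(distancing_alert(occupants=12, floor_area_m2=40.0))  # True  -> alert
print(distancing_alert(occupants=8, floor_area_m2=40.0))   # False -> fine
```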

Distribution of Pharmaceuticals

Vaccines require temperature-controlled environments from production, through distribution, to their destination in order to maintain their full effectiveness. Huge advances are being made in logistics tracking as IoT LPWAN technologies allow devices with battery lives of 5 to 10 years not only to track trailers and containers but also to cost-effectively track crates and roll cages throughout the supply chain. Cold chain monitoring and tamper detection on trailers already allow pharmaceutical suppliers to deliver their goods safely. This will soon be enhanced by IoT technologies enabling stick-on smart labels that monitor individual packages of medicines for temperature fluctuations and alert if an individual box has been tampered with.
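Cold chain monitoring of this kind amounts to flagging readings outside an allowed band. A hedged sketch, assuming the common 2–8 °C vaccine storage range and an invented smart-label reading format:

```python
# Readings are (timestamp, degrees C) pairs in an invented label format.
def excursions(readings, low=2.0, high=8.0):
    """Return the readings outside the allowed band (default 2-8 C)."""
    return [(ts, t) for ts, t in readings if t < low or t > high]

log = [("08:00", 4.1), ("09:00", 8.9), ("10:00", 5.0), ("11:00", 1.4)]
print(excursions(log))  # [('09:00', 8.9), ('11:00', 1.4)]
```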


Every industry vertical can benefit from IoT to enable companies to run more efficiently and keep both staff and customers safe. Take the airline industry, struggling with the impact that social distancing will have on its business, as an example. Many airlines and aviation services companies have already announced job losses, and more will follow. There is much that IoT can do to streamline the handling of passengers and luggage both on the ground and in the air. For example, ground support equipment used to service aircraft between flights (ground power operations, aircraft mobility, and cargo/passenger loading) can be monitored remotely. Knowing where equipment is, and whether it is in a fit state of operation, will ease the pressure on a reduced services workforce. Tracking of Unit Load Devices will ensure that luggage goes to its intended destination. Wireless-enabled smart labels could also be used on individual luggage items to further improve tracking efficiency through the entire baggage handling process.

The retail and hospitality industries have faced a real challenge in adapting to the new social distancing policies. It’s difficult to operate profitably with a reduced customer base, especially after several months of zero revenue and the additional costs of remodelling premises. IoT will be vital in optimizing operations: occupancy sensors that automatically monitor and control footfall in and out of shops, bars and restaurants; digital signage to inform shoppers of goods and prices; and smart payment solutions, all aimed at keeping things moving swiftly, maintaining safe distances and maximizing spend. Difficulty in predicting demand as countries gently ease out of lockdown means that retailers will have to optimize their supply chains and stock control systems, a task made even more difficult by potential localized lockdowns until a vaccine is available.

As factories ramp back up, the manufacturing industry will need to lower costs and streamline operations whilst ensuring employee safety and security. Automating and proactively monitoring machinery to extend maintenance cycles, and monitoring waste, can have a dramatic impact on costs. Many manufacturers have little or no visibility of up to 10% of materials and parts in transit, resulting in increased Capex from the need to overstock when items go missing and additional Opex to recover them. Sensors can provide greater visibility across inbound and outbound logistics by tracking Returnable Industrial Packaging, enhanced by AI to optimise and further strip out costs.

Connectivity – One Size Will Not Fit All

So, how do we get the data from multiple sensors, some static, some moving, in internal and external environments? The IoT wireless landscape is broad with a range of network connectivity options. Each network has its own unique set of attributes in terms of data bandwidth, latency and range. Each use case has its own unique requirements in terms of cost, data rate, form factor, provisioning and power consumption. Collecting data from remote sensors requires some thought to select the right connectivity option and one size clearly does not fit all.

This can be illustrated by considering the connectivity requirements for three of the key use cases in the post Covid-19 world.

Making buildings safe: In the building case, most sensors transmit status or event-based messages with low data requirements. Latency is important but not critical. Sensors will typically not have a mains power source, so a long battery life is required. LPWAN technologies are the ideal fit, but what if higher data payloads or video monitoring are required? Most buildings will have Wi-Fi, but managing usernames and passwords for sensors connecting to Wi-Fi routers in an uncontrolled environment is problematic, and Wi-Fi will also significantly reduce battery life, so cellular may be a better option for sensor devices requiring higher data bandwidth.

Tracking of pharmaceuticals: There are levels of granularity in asset tracking. In some cases geofencing is sufficient to understand if a vehicle containing a consignment is generally where it should be (distribution centre, pharmacy, hospital). In other cases GPS tracking of the vehicle in transit is required. The presence of a power source on the trailer reduces the need for battery-only operated devices so cellular connectivity can be used but a power efficient LPWAN network may be required for trailers without a power source. LAN technologies can be used at building waypoints and satellite connectivity may be required for trucks travelling outside the coverage footprint of cellular and LPWAN networks. Tracking of pharmaceutical packages using smart labels takes it to the next level and needs a much cheaper, potentially disposable sensor device that can locate, track and monitor packages for temperature and tampering in real time.
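Geofencing of the kind mentioned above reduces to a distance check against a waypoint. A sketch using the standard haversine great-circle formula; the coordinates and radius are invented for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(pos, centre, radius_km=1.0):
    """Is a tracked asset within the allowed radius of a waypoint?"""
    return haversine_km(*pos, *centre) <= radius_km

depot = (51.5074, -0.1278)  # invented waypoint (a distribution centre, say)
print(inside_geofence((51.5080, -0.1270), depot))  # True: a few hundred metres away
print(inside_geofence((52.0000, 0.0000), depot))   # False: tens of km away
```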

Optimising manufacturing: Equipping factories with IoT sensors is a bit of a mixed bag. Modern machines have interfaces that sensor devices can connect to for monitoring and control, but older machines require sensors that can derive the status of the machine from, for example, temperature and vibration. Cellular and LPWAN coverage inside factories can often be marginal, but most facilities have managed Wi-Fi access, allowing sensor devices to be deployed throughout. RFID tags have typically been used to track high-value assets moving in and out of the factory, but RFID does not support external tracking, which therefore requires WAN connectivity.

The perfect scenario would be a universal radio chipset/modem that can connect to any wireless technology at the lowest hardware cost, but that’s just not realistic. The cost of single-technology RF chipsets/modems varies enormously depending on the wireless technology, and even with economies of scale a multi-technology solution would blow many cost-sensitive IoT use cases out of the water. Despite the hype that IoT requires one-size-fits-all connectivity, multiple RF technologies will continue to co-exist and, rather than connectivity providers trying to make a land grab, solution providers and system integrators need to understand the pros and cons of each and choose wisely.
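The "choose wisely" point can be sketched as a toy selection rule that mirrors the three use cases above. The thresholds and rules are illustrative assumptions only, not a definitive selection guide:

```python
# Toy connectivity chooser; thresholds are invented simplifications.
def suggest_connectivity(data_rate_kbps: float, battery_years: float,
                         indoor: bool, managed_wifi: bool) -> str:
    if data_rate_kbps > 100:  # high payloads or video monitoring
        return "Wi-Fi" if (indoor and managed_wifi) else "Cellular"
    if battery_years >= 5:    # long-life, low-data sensors
        return "LPWAN"
    return "Cellular"

print(suggest_connectivity(1, 10, indoor=True, managed_wifi=False))    # LPWAN
print(suggest_connectivity(500, 0, indoor=True, managed_wifi=True))    # Wi-Fi
print(suggest_connectivity(500, 0, indoor=False, managed_wifi=False))  # Cellular
```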

Which IoT Platform?

Managing the data received from an ecosystem of sensor devices from different manufacturers via multiple wireless technologies is not a simple task. There are 620 IoT platforms on the market, an increase of 170 in less than three years: a confusing array of choices, although the top 10 providers hold 58% market share.

Each platform provider offers different functionality (device management and diagnostics, connectivity management, data reporting and analysis), and each has a different pricing policy based on data volume, number of users, and whether data is analysed centrally or at the edge, all of which makes choosing the right platform difficult. Solution providers and system integrators need to take a considered approach to choosing the platform that fits the use case and best integrates with the customer’s IT and OT systems.

According to IoT Analytics, customers have nine common platform-specific purchasing criteria. The three top criteria are:

1. End-2-end security (i.e. how secure the platform is including the data exchange and user and device authentication)

2. Scalability (i.e. how easy it is to go from 100 to 100,000 connected endpoints)

3. Usability (i.e. how user friendly the interface is)

It is important to determine the trade-offs and choose a platform before starting a proof of concept. Is security the key issue? Is interoperability important for handling multiple device suppliers? Does the platform need to manage more than one type of wireless connectivity? Better to start off on the right foot than to have to compromise or change direction down the line.
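One simple way to make those trade-offs explicit is a weighted scoring matrix over the criteria. A sketch, with invented platform names, weights and ratings:

```python
# Invented weights reflecting the priority order of the top three criteria.
WEIGHTS = {"security": 0.5, "scalability": 0.3, "usability": 0.2}

def platform_score(ratings: dict) -> float:
    """Weighted sum of 0-10 ratings per criterion; higher is better."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

candidates = {  # invented example platforms and ratings
    "PlatformA": {"security": 9, "scalability": 6, "usability": 7},
    "PlatformB": {"security": 7, "scalability": 9, "usability": 8},
}
best = max(candidates, key=lambda p: platform_score(candidates[p]))
print(best)  # PlatformB
```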

Breaking the Proof of Concept Cycle

A plethora of sensor, device, connectivity and platform options has resulted in a cycle of trials and proofs of concept as businesses try to quantify the return on investment for their specific IoT use cases. Many fail in a mire of data, collected to satisfy the requirements of multiple stakeholders, that no one knows how to make sense of. Yes, some use cases have unique requirements, but only so much data is really meaningful, and many solutions that have been tried and tested for similar use cases could easily be adapted with minor adjustments.

We face a compelling need during the Covid-19 pandemic to restart economies, and we need to move much faster than we have in the past. Many solutions can be repurposed to shorten the time to market. For example, the data from smart handwash dispensers in commercial buildings that is used to check when dispensers need refilling can be combined with data from washroom door sensors to understand usage patterns as part of a wider hygiene monitoring programme. Data from air quality, temperature, humidity, movement and desk occupancy sensors can help facilities managers optimize buildings to provide a safer environment for office workers. Why reinvent the wheel when the building blocks are potentially already there?

Start with the Problem not the Solution

All countries will face the same challenges, but we must look around the world to learn and apply those lessons. Asia has much experience of dealing with virus outbreaks and has a rich source of track and trace data that can be used to enhance IoT solutions aimed at controlling the spread. Many of the Covid-19 deaths in Europe were among the elderly, care home residents and those with underlying health conditions. Solutions aimed at protecting the vulnerable will be vital. Africa has high levels of malnutrition and disease that make Covid-19 even more deadly. The use of IoT to improve farming and agriculture and make hospitals more efficient will be key in the long run.

IoT will have an important role to play in controlling the spread of Coronavirus, maintaining a safe environment for people and restarting the economy but industry will need to be pragmatic. It’s not about pushing technology, it’s about understanding society and industry’s pain points, innovating where necessary and applying what already works. All parties need to be open and to work together on a journey of discovery for the greater good of a world coming to terms with the new normal.

Kevin Maher is an independent telecom consultant with more than thirty years’ experience, specialising in the IoT industry. He is also an associate partner in the Solutionheads network.

The Importance of IoT in a Post Covid-19 World
Download • 517KB


  • Using automation can lead to functional and technical stability of multi-tiered, complex software projects.

  • Automating regression AND functional tests can help implement Test-Driven Development (TDD), which defines requirements for developers better than acceptance criteria do.

  • Projects should not fear rapid cycles with large numbers of small changes, even when deadlines are tight, as this improves stability and increases quality.

Many software projects become chaotic despite good project management and carefully considered development processes. Deadline pressure, rogue prototype development, or the existence of many heterogeneous delivery units combine to create chaos. We were able to calm a chaotic project by implementing test automation and test-driven development using our own Test and Automation Framework (TAF-Nuts), characterized by an emphasis on practicality (just getting the job done), trust (let good testers test and let good developers develop) and simplicity.

This article identifies the characteristics of the chaos and describes how ProArchCon made the project successful. By explaining our approach, we offer suggestions for projects in similar situations. ProArchCon is a member company of the Solutionheads EEIG and the author of this article is an associate member and the senior technologist of the group.

The Situation

At one of Germany’s national mobile, fixed-net and cable network and service providers, our role was to augment the combined on-shore / off-shore, internal delivery unit’s front-end team in software quality for the customer self-service web portal. The legacy system architecture included four backend applications, one front-end application and two separate service layer applications. Each element in the architecture was developed by a different external, off-shore vendor and was tested by further external, off-shore units. Only the acceptance test was carried out by onshore, internal units. The front-end development was carried out by internal delivery units (albeit offshore).

Since software users only ever see the front-end, each error (even those originating in the backend) is perceived as a front-end problem. We saw our role thus in terms of total quality, not confined to front-end quality.

Ensuing Chaos

We entered the project and found it characterized by untested backend and service layer software being rolled into the acceptance test environment. The negative impact was sufficient to completely block development on functional requirements and force a focus on stability issues. This unplanned new focus only exacerbated pressures caused by aggressive deadlines.

The front-end software was developed and functionally tested against stubs based on the official documentation of the backend. Only after successful integration in the test environment (with the stubs removed) could the front end declare its software finished. Front-end development was thus dependent on a stable and functioning backend.

We decided to halt front-end development until the quality of the backend and service layer had been assessed. ProArchCon found evidence that software was being released into the test environment in an uncontrolled fashion and crashing there. Indications were:

  • When using tools to test the service, there would be HTTP 400-type errors coming from both the backend and the service layer.

  • The service would return an HTTP 200 response, but the values and the format of the JSON response did not match the official interface specification.

  • Between any two runs of the software, the errors returned were functionally different, indicating that there were uncontrolled and unplanned changes to the software being constantly implemented.
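Checks of this kind are straightforward to automate. A hedged sketch (not the actual Nuts implementation) of a per-service OK/NOK verdict covering both failure modes listed above, with an invented interface specification:

```python
def check_response(status: int, body: dict, required_fields: set) -> str:
    """Return an 'OK' / 'NOK' verdict for one service call."""
    if status != 200:
        return f"NOK: HTTP {status}"       # e.g. the HTTP 400-type errors
    missing = required_fields - set(body)  # fields the spec demands
    if missing:
        return f"NOK: missing fields {sorted(missing)}"
    return "OK"

# Invented interface specification for a self-service portal call:
SPEC = {"customerId", "tariff", "balance"}

print(check_response(400, {}, SPEC))                    # NOK: HTTP 400
print(check_response(200, {"customerId": "42"}, SPEC))  # NOK: missing fields ['balance', 'tariff']
print(check_response(200, {"customerId": "42", "tariff": "basic",
                           "balance": 3.5}, SPEC))      # OK
```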

Implementing Test Driven Development and Automation

ProArchCon attempted to communicate the problems in test to the responsible levels of management in status reports. The individual delivery units claimed that there was no problem, or that the problem was not caused by their unit. We were faced with having to develop against software that was not ready and having to prove to others that the software was failing in test. To do this, and to rapidly improve software quality and stability, we introduced test automation and test-driven development into the project. Neither method is new, but our TAF-Nuts framework, with its strong focus on practicality, trust and simplicity, is rare and was new to the customer and the project.

We find the following explanation of TDD very appropriate:

“Test-Driven development is a software development process that relies on the repetition of a very short development cycle: first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, and finally refactors the new code to acceptable standards.” - Farcic, Viktor. “Test Driven Development (TDD): Example Walkthrough.” Technology Conversations, 20 Dec. 2013,
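A minimal illustration of that cycle using Python's built-in unittest; the formatting function and its test are invented for demonstration and unrelated to the project's code base:

```python
import unittest

# Step 1: write the (initially failing) test that defines the desired function.
class TestFormatBalance(unittest.TestCase):
    def test_formats_cents_as_euros(self):
        self.assertEqual(format_balance(350), "3.50 EUR")
        self.assertEqual(format_balance(5), "0.05 EUR")

# Step 2: produce the minimum amount of code to make the test pass.
def format_balance(cents: int) -> str:
    return f"{cents // 100}.{cents % 100:02d} EUR"

# Step 3 would be refactoring to acceptable standards, re-running the test.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFormatBalance)
unittest.TextTestRunner(verbosity=0).run(suite)
```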

Test Automation of the Regression Test

We used our own Nuts automation tool. Testing the database was accomplished using the DB Nuts for Oracle. Testing the backend processes was accomplished using the BE Nuts for Linux. Testing the service layers was accomplished using the REST and the JSON Nuts and testing the front-end was done using the Web Nut. All five Test Nuts were then combined with a Chain Nut and the chain was scheduled to run every 30 minutes.

A “Nut” is an automation abstraction for a particular technology that allows the automation to be configured rather than “developed” and without knowledge of the technology-specific software implementation.

The same test of the software was run every thirty minutes. Full documentation of each test run was then sent to the developers. Because the first issue was stability, we provided an OK / NOK indication for each service and each layer of the architecture.

Reviewing the results of several days of continuous testing, we discovered changes being made to the software and released into the test environment throughout the day without prior testing. ProArchCon used the documented test results to communicate with project management and the delivery units and force a focus on stability issues rather than functional development alone.

Our test automation approach further aided backend and service development by indicating which Nut was throwing which error. The development teams could quickly identify which layer and which application was failing (for example, by comparing the output of the backend Linux Nuts with that of the database Nuts).

We gave the Nut automation software and configuration to the different delivery units so that they could perform the regression test on their software before delivering into the acceptance test environment.

Test Driven Development (TDD)

Using test automation for the regression test was necessary to improve the stability of software delivery from the delivery units and to align the efforts of all units. There remained the challenge of meeting aggressive project deadlines while fulfilling the functional requirements. To aid in this area, we implemented an end-to-end, test-driven development methodology.

The test-driven development was set up in the following manner to allow for rapid implementation of functional changes to the software:

  • Implement the end-to-end perspective – focusing on logical uses of the software originating with a user request and encompassing all actions at all software tiers to respond to the user request

  • Cycle through large numbers of small changes

  • Use quality gates prior to the release for acceptance test

These three points together help to quickly develop and iteratively expand upon a minimum viable product (MVP) which is essential in defining a functional change for customer-facing software.

End-To-End Perspective

All test cases were first written from the user/customer perspective. The functional, test-driven test case was then automated on the front end. At the beginning of each implementation cycle, the new automated functional test was executed. Based on the detailed results, TDD cases were then written for the service and backend layers of the software.

The functional test for the TDD was automated using the ProArchCon Nuts Framework. By automating the test, developers could execute it as often as they needed to see whether the changes they developed were successful. Once the automated TDD functional tests passed after an implementation cycle, the results were incorporated into the automated regression tests.

Large Number of Small Changes

We convinced the developers to drastically reduce the size of their developed changes. The goal was to be able to have two implementation cycles (including test) per day. This allowed the software to grow slowly in functionality, while being able to pinpoint the causes of errors quickly.

The limited scope of the implementation cycles required more planning at the developer level. While project management concentrated on the requirement and acceptance criteria level, we sat down each morning and each afternoon with the lead developers to plan the detailed changes. This “overhead” focused the efforts of the entire team and improved quality and consistency across all tiers of the solution architecture.

Quality Gates Prior to Release for Acceptance

Using a combination of the automated regression and functional TDD tests meant that there could be a strong quality gate at each iteration of software development. An automatic regression test marked the beginning of each implementation cycle across all tiers of the solution. Once it was certain that the regression tests were successful, the end-to-end TDD functional automated test cases were performed.

We helped at all layers of the solution architecture with the development of automated TDD test cases. When all development units signaled that they were done with their development, all units would run the automated TDD case. When both the automated regression test and the automated TDD functional test were successful, the next iteration could start.

Reflection and Lessons Learned

The project was a success. With only a three-week delay in an eleven-month project, it went into production including all required functionality. ProArchCon was able to improve stability across all delivery units and set up structures that made subsequent projects easier.

Success Factors

The project benefitted from the existence of a substantial code base prior to the changes and from having developers on board who were familiar with it. Without that code base and those developers, it is unlikely that such a granular level of control at the developer level would have been possible.

A great deal of the change that we were able to implement was possible because the project management in place trusted its developers. Where project management tends to micro-manage, or where senior developers lack responsibility, it would not have been possible to implement such a granular TDD.

Lessons Learned

The conditions may not be the same from project to project, but the following lessons learned are widely applicable:

  1. Stability before Functionality / Software stability first! A good deal of the problems in the project were caused by wild development and poor communication between the delivery units. Before new functionality can be implemented in a heterogeneous environment, it is important to establish a stable and functioning baseline. Through constant, continuous comparison against this baseline, the development and troubleshooting of new functionality becomes easier. Test automation substantially improves stability, reduces test cycle durations, and eliminates manual test errors.

  2. Test-Driven Development Defines Requirements for the Developers In scrum/agile projects a lot of effort is spent defining acceptance criteria. Acceptance criteria are useful in defining the use cases of a software application, but in environments with a large proportion of service layer and backend development, a “use case” of the software may be too abstract for development. By defining the results of functional tests of the software at each layer, the requirement is much clearer to the developers.

  3. Automate Every Step Implementing test automation across all layers in the software architecture for a project aligns the efforts of the developers and allows for quick assessment of project status and quality. By automating regression test, a project can guarantee stability between and beyond implementation cycles. By automating the functional test in the TDD context, the developers can gain speed in checking their progress and knowing when development is ready for the next steps in the process.


Tom Allen is the general manager and senior technologist at ProArchCon GmbH, an IT service provider based in North Rhine-Westphalia, Germany. ProArchCon created the Test and Automation Framework – Nuts. TAF-Nuts includes the Nuts software for coordinating test and automation across all layers of a software architecture, as well as the consulting services needed to improve quality in general. Test automation is the key to software stability and reliability. For more information about TAF-Nuts see:

Tom Allen is also an associate partner with the SolutionHeads EEIG – the group’s senior technologist. More information about the SolutionHeads can be found here:

This Article is available for download here:

Continuous Testing Better DevOps
Download PDF • 108KB


The Beginning: Revenue Assurance

Revenue Assurance (RA) is a specialist term from the telecommunications industry for which I have not yet found a translation in any language. Literally: securing one’s income? It is really a perfectly normal process: every company should make sure that its customers actually pay for the services it provides. RA was often regarded as the mere counting of CDRs (Call Data Records, the individual billing records for telephony services of all kinds), but the topic has always encompassed much more. A few simple but easy-to-follow examples:

  • Does a granted discount end after the agreed period?

  • Does every customer receive a (correct, timely) invoice, and do they actually pay it?

  • Is the combination of tariffs and services valid?

All of these are, of course, questions that also play a role in other industries.
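The invoice check above, for instance, reduces to a simple reconciliation between customer and billing records. A sketch with invented records:

```python
# Invented customer and invoice records for one billing period:
customers = [{"id": 1, "active": True},
             {"id": 2, "active": True},
             {"id": 3, "active": False}]
invoices = [{"customer_id": 1, "period": "2020-06"}]

def missing_invoices(customers, invoices, period):
    """IDs of active customers with no invoice for the given period."""
    billed = {inv["customer_id"] for inv in invoices if inv["period"] == period}
    return [c["id"] for c in customers if c["active"] and c["id"] not in billed]

print(missing_invoices(customers, invoices, "2020-06"))  # [2]
```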

Many such controls naturally existed in various companies before, even if they were called something else, perhaps simply “controlling”. The TeleManagement Forum (TMF), among others, began early on to define common standards such as metrics and KPIs for RA. This allowed companies to benefit from the experience of others and even compare themselves, for example on the metric “percentage of recovered losses”. (For further details on metrics, KPIs or the TMF’s complete RA framework, you can contact the TMF directly; I am also happy to talk and provide information.)

For more than eight years the TMF has regularly conducted surveys on the state of RA worldwide; the recently completed survey is discussed later on.

Next Steps

But it is not only revenues that must be assured; the costs incurred must also be checked for correctness, in modern parlance: Cost Assurance. At telecommunications companies these costs can be substantial. Even where there is no network of one’s own, they include all upstream service providers, plus roaming charges, special numbers (e.g. 0800) and, these days, services of all kinds that are paid via the phone bill (mobile payments). A telecommunications company thus has relationships, and money flows, with all manner of other companies, and these must be checked. Many telecommunications companies regard this area of checks as an integral part of RA.

With all the data on costs and revenues at hand, the step to margin analysis is an easy one. Depending on the effort and tool support available, this can be broken down to the level of individual customers, which also implements a customer value analysis that can serve as the basis for further customer contacts and targeted marketing campaigns.
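The step from revenue and cost data to per-customer margins can be sketched in a few lines; all figures are invented:

```python
# Invented per-customer revenue and cost figures:
revenues = {"A": 120.0, "B": 45.0, "C": 80.0}
costs = {"A": 70.0, "B": 50.0, "C": 30.0}

# Margin per customer, and the customers worth reviewing (negative margin).
margins = {c: revenues[c] - costs[c] for c in revenues}
loss_making = [c for c, m in margins.items() if m < 0]

print(margins)      # {'A': 50.0, 'B': -5.0, 'C': 50.0}
print(loss_making)  # ['B']
```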

The Consequence: Business Assurance

In the context of constantly new demands on telecom companies (digitalisation, new technologies such as 5G / SD-WAN), further safeguarding measures and risk disciplines have for some time been grouped together under the broader umbrella of Business Assurance.

Around 90% of the participants in the TMF’s latest RA survey see Business Assurance (BA) as the natural extension of RA:

Figure 1 / Question 5

According to the current TMF definition, the following disciplines belong to Business Assurance:

  • Revenue / Cost Assurance Management

  • Fraud Management

  • Asset Assurance Management

  • Margin Assurance Management

  • Migration Assurance Management

  • Transformation Assurance Management

  • Ecosystem Assurance Management

  • Regulatory Assurance Management

  • Customer Experience Management

The evolutionary path outlined so far shows that BA experts do considerably more than count data records. RA has always been an area that had to communicate with almost every department in the company: for example finance, where, according to the latest survey, RA is organisationally anchored in around 60% of the companies surveyed; IT and billing; marketing, when new products are introduced; and, depending on the company structure, various others. The overview of disciplines above also makes clear that the demands on a Business Assurance department are very diverse, which means that a variety of skills and roles need to be filled: business analysts, operational analysts, data analysts, data scientists, ...

Completely new techniques such as blockchain, machine learning and AI will presumably also play an increasing role in Business Assurance. So far, however, active consideration of them is still limited; it will be interesting to see how this develops in the next survey.

Here is the management summary of this year’s survey.

Figure 2 / Management summary

Alongside the facts described so far, it also contains information on the general coverage of the checks, the losses found, the planned extensions and various other details, all also viewed in terms of their development over recent years.

The complete survey can be found here:

Further information on Revenue and Business Assurance is, as already mentioned, also available directly from the TMF.

And I look forward to a lively exchange on the topic. Thank you!

Von RA zu BA
Download PDF • 390KB



Katrin Tillenburg is an independent consultant in the telecommunications field, specialising in BSS, billing and Business Assurance. She has worked on international projects for more than 25 years, is an active member of the TMF and a founding partner of the SolutionHeads network.


