Navigating the ethical boundaries of independent security research: A comprehensive analysis of the consumer pet tracking ecosystem

research hardware IoT

Pet GPS trackers collect far more data about their owners than most people realize – from daily routines to home addresses. In this post, Gerhard Hechenberger and Bernhard Gründling of the SEC Consult Vulnerability Lab assess the hardware and software security of four popular GPS trackers available on the European market, analyze the physical and software attack surfaces, and present a methodology for conducting this kind of research ethically. No exploitable vulnerabilities were found within the time frame of this research project, but the complexity of the devices and the privacy implications of constant location tracking deserve closer attention, especially with the EU Cyber Resilience Act on the horizon.

No dogs were harmed during this research.

GPS trackers are everywhere – the market for pet trackers alone is projected to hit $3.7 billion this year. But what data are these devices actually collecting, and how secure are they really? These systems operate through an infrastructure comprising low-power hardware, cellular connectivity, cloud backend services, and multi-platform user interfaces. While these products are sold on the promise of peace of mind for animal owners, the underlying reality is that they generate a wealth of data that often tracks human activities more extensively than those of the animals they monitor.

Independent researchers are often the only ones actually checking these products, acting as a decentralized audit mechanism for consumer technologies, catching what vendors and regulators may miss. However, this research exists within a complex and often hostile legal environment. In jurisdictions such as Austria and across the European Union, the distinction between a legitimate security assessment and a criminal "hack" is often defined by the presence of explicit authorization. This blog post details a short time-boxed security assessment of four representative GPS tracking devices, focusing on hardware and software layers, and laying out a framework for ethical research along the way. This analysis demonstrates how security professionals can navigate the line between scientific research and unauthorized access. 

The privacy-security connection in companion animal technology 

Our motivation for investigating pet tracking ecosystems is the realization that the security (and safety) of the animal is linked to the privacy of the human owner. Empirical research into pet wearables indicates that these devices collect significantly more personal data than they need to track your dog. Because pets are almost exclusively co-located with their human owners, the GPS coordinates of a pet serve as an accurate proxy for the owner’s home address, daily routines, and work schedules. 

Access to historical location logs allows for the derivation of a household’s "pattern of life," enabling malicious actors to identify periods when a residence is likely to be unoccupied. This creates risks ranging from targeted burglaries to domestic stalking. Beyond physical security, there is the concern regarding the processing of this data by commercial third parties. An insurance company could look at your dog's activity data and draw conclusions about your own health or lifestyle. 

Despite these risks, consumers are largely unaware of them. Studies involving hundreds of participants in the UK, USA, and Germany have shown that while pet owners express generalized concern about cyberattacks targeting their pet technology, they take significantly fewer precautions with these devices compared to their traditional computing environments. It doesn't help that many of these apps have vague or non-compliant privacy policies. Analysis has shown that many pet tech applications communicate with third-party trackers before a user can even provide informed consent, violating the principles of the General Data Protection Regulation (GDPR).

Privacy implications

| Data Type | Primary Function | Secondary Privacy Implication |
| --- | --- | --- |
| Real-time GPS | Pet recovery and safety | Real-time tracking of the owner's movements |
| Historical Geofencing | Notifications for "safe zones" | Identification of home, work, and frequently visited locations |
| Activity/Health Logs | Monitoring fitness and vitality | Inference of owner mobility and lifestyle habits |
| Media Uploads | Profile personalization | Metadata leakage (EXIF); photos revealing indoor environments |
| Account Metadata | Billing and notification services | Exposure of PII (email, phone, financial status) |
| Device Logs | Debugging and support | Disclosure of local Wi-Fi SSIDs and internal network architecture |

Methodology: Defining the Line in Ethical Research 

The central challenge of independent security research is demonstrating the existence of a vulnerability while remaining within the bounds of legal and ethical behavior. This requires a distinction between "safe" and "intrusive" testing classes. 

Safe testing classes for researchers 

Safe tests are those that can be performed using the researcher's own hardware and accounts, ensuring that no third-party data is ever accessed or modified. These include: 

  • Passive Reconnaissance: The identification of server versions, exposed headers (e.g., Server, X-Powered-By), and the analysis of public metadata. This does not involve active probing of the application logic.
  • Client-Side Static Analysis: The inspection of the application's source code and, where legally permissible, the reverse engineering of the mobile application binary (APK or IPA). This allows for the identification of hardcoded API keys, insecure storage patterns, and logic flaws within the researcher's own local environment. Note that binary reverse engineering may be restricted by a vendor's EULA, and the legal framework varies by jurisdiction. The EU's Software Directive (2009/24/EC) permits it under specific conditions, while other regions apply stricter or more ambiguous rules.
  • Network Traffic Analysis (MITM): Using a proxy (e.g., Burp Suite) to intercept and analyze the traffic between the researcher’s device and the backend server. As long as the researcher is only analyzing traffic from their own authorized session, this is a legitimate method for understanding API structures.
  • Authorization Cross-Checking (The Two-Account Method): Testing for Broken Object Level Authorization (BOLA) or Insecure Direct Object References (IDOR) by purchasing two separate trackers with two separate accounts and attempting cross-account access. We describe this approach in detail below. 

Intrusive testing classes requiring explicit permission 

Intrusive tests are those that pose a risk to the availability, integrity, or confidentiality of the target system and its users. These are problematic to assess in independent research without a prior agreement: 

  • Denial of Service (DoS) Testing: Any test intended to measure the resilience of the system against high volumes of traffic or complex, resource-heavy requests.
  • Automated Brute-Forcing: The use of high-speed automated tools to guess credentials or identifiers. This can lead to account lockouts for legitimate users and unnecessary load on the target infrastructure.
  • Active Exploitation of Injection Flaws: While identifying a potential SQL injection by entering a single quote and observing an error message is often acceptable, the actual execution of a payload to extract the database is problematic.
  • Social Engineering: Targeting the employees or customer support of the manufacturer to gain access or information. This is inherently deceptive and falls outside the scope of technical research. 

SEC Consult Hardware Lab: Analysis of Physical Attack Vectors 

The security posture of a GPS tracker is not determined solely by its cloud interface; the physical hardware often provides the initial foothold for an attacker. In SEC Consult’s hardware lab, research involves identifying and exploiting physical attack vectors that might lead to confidential information disclosure, device compromise, or as a consequence even to the compromise of cloud infrastructure or other devices. Common tasks include: 

  • Hardware Analysis: Opening the device for PCB analysis and hardware component identification to identify possible physical attack vectors.
  • Debug Interface Identification: Interfaces that were used during development or production and are not locked afterwards can leak internal information, give access to device memory, or even allow for full device compromise.
  • Inter-Module Communication: Sniffing communication on buses between internal modules might lead to the disclosure of internal information, or even to full device compromise if an attacker is able to alter the information on the bus.
  • Memory Access: Gaining access to data stored on internal or external flash chips via debug interfaces or by directly reading the memory can lead to secrets that compromise the cloud environment or even other devices. 

Most IoT devices rely on standardized protocols for internal communication and debugging. These include: 

  • UART (Universal Asynchronous Receiver-Transmitter): This is the most common serial interface. An open UART port often provides access to a serial console, which might output system logs or even offer a root-level shell without a password.
  • JTAG (Joint Test Action Group) / SWD (Serial Wire Debug): These are industry-standard interfaces for testing and debugging. In a security context, they allow a researcher to halt the processor, inspect registers, and dump the entire firmware directly from the chip's memory for bare-metal analysis.
  • SPI (Serial Peripheral Interface): Many trackers store their operating system and configuration data on external SPI flash memory chips. By connecting to the SPI pins, it is possible to dump the flash memory, providing raw binary data for reverse engineering. 

The "Two-Account" method: A framework for ethical authorization testing 

The core of this research is the application of the "Two-Account" method to test for Broken Object Level Authorization (BOLA). As the top-ranked entry in the OWASP API Security Top 10, BOLA allows an attacker to access or modify data belonging to another user, often by simply changing an identifier in an API request.

The mechanism of Broken Access Control 

In a typical Broken Access Control / Insecure Direct Object Reference (IDOR) scenario, an application exposes an endpoint such as:

GET /api/v1/pet/555/location 

If the application only checks if the user is logged in (Authentication) but fails to check if the user owns pet #555 (Authorization), any user could view the location of any pet by changing the ID. 
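To make the missing check concrete, here is a minimal sketch in Python (all names, IDs, and coordinates are hypothetical) contrasting an authentication-only handler with one that enforces object-level authorization:

```python
# Illustration only: PET_OWNERS / PET_LOCATIONS stand in for a real database.
PET_OWNERS = {555: "account_a", 556: "account_b"}          # pet_id -> owner
PET_LOCATIONS = {555: (48.21, 16.37), 556: (47.07, 15.44)}  # pet_id -> lat/lon

def get_location_vulnerable(session_user: str, pet_id: int):
    # Authentication only: any logged-in user may query any pet ID.
    return PET_LOCATIONS.get(pet_id)

def get_location_fixed(session_user: str, pet_id: int):
    # Authorization: the server must also verify object ownership.
    if PET_OWNERS.get(pet_id) != session_user:
        return None  # would be a 403 in a real API
    return PET_LOCATIONS.get(pet_id)
```

Account A querying pet #556 succeeds against the vulnerable handler but is rejected by the fixed one – that single ownership comparison is exactly what BOLA-vulnerable APIs omit.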

The execution of the test 

Without explicit permission, accessing Pet #556 means accessing someone else's data – which constitutes unauthorized access. To solve this, the researcher must act as both the "attacker" and the “victim”.

  1. Subscription A: The researcher purchases Tracker A and registers it under Account A.
  2. Subscription B: The researcher purchases Tracker B and registers it under Account B.
  3. Cross-Validation: While authenticated as Account A, the researcher uses a proxy tool to intercept a request for Tracker A's location. They then manually change the ID in the request to match the ID of Tracker B.
  4. Result Interpretation: If the server returns Tracker B’s data, the vulnerability is confirmed. Because the researcher owns both accounts, no third-party data has been touched. 

This methodology provides a "safe harbor". It demonstrates the failure of the authorization logic while adhering to the principle of data minimization – accessing only what is necessary to verify the bug. 
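The steps above can be sketched as follows; the endpoint layout and tracker IDs are invented for illustration, and in practice the rewritten request would be replayed through an intercepting proxy:

```python
# Hypothetical sketch of steps 3 and 4 of the Two-Account method.
TRACKER_A_ID = "1001"  # registered under Account A (our "attacker" session)
TRACKER_B_ID = "1002"  # registered under Account B (our own "victim")

def rewrite_tracker_id(path: str, own_id: str, other_id: str) -> str:
    """Step 3: replace our own tracker ID with the second account's ID."""
    return path.replace(own_id, other_id)

def interpret_result(status: int, returned_owner: str) -> str:
    """Step 4: only a 200 response containing Account B's data confirms BOLA."""
    if status == 200 and returned_owner == "account_b":
        return "vulnerable"
    return "not vulnerable"
```

A 403 (or an empty response) for the rewritten request indicates that the backend correctly validates ownership; because both accounts belong to the researcher, either outcome touches no third-party data.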

Practical Results: Comparative Analysis of four Devices 

The research project involved a structured assessment of the hardware, mobile, and web applications associated with four popular GPS trackers. While no exploitable vulnerabilities were identified in the final production versions, the hardware analysis revealed a higher degree of architectural complexity than initially anticipated. All devices make use of multiple SoCs and additional modules – often two main processors, an LTE modem, and a GPS module. Additionally, all of these run very minimal bare-metal firmware and use binary communication protocols. A comprehensive assessment would require far more time than was available.

| Device Manufacturer | App Downloads (Play Store) |
| --- | --- |
| Fressnapf | 100,000+ |
| Weenect | 100,000+ |
| PAJ | 100,000+ |
| Kippy | 100,000+ |

From a privacy perspective, it was also interesting to analyze where the user data is hosted. By intercepting the application traffic, we analyzed the domains of the API endpoints, revealing the hosting providers. The mobile and web application infrastructure is hosted by the following providers: 

| Device | Hosting Provider |
| --- | --- |
| A | Scaleway |
| B | AWS |
| C | Hetzner |
| D | AWS |

Device A

Software Results

Device A used a somewhat unconventional approach to session management. The API accepted a fixed Authorization token for both independent accounts – the only unique secret required is the "devicetoken", which is static per tracker and never expires. We were able to query data more than half a year later with the same parameters – no new session creation was necessary.

Another interesting observation was that the device uses server infrastructure with domain names related to bike tracking – the manufacturer apparently repurposes this environment for pet tracking with a different front-end.

To assess the risk of token prediction, the mathematical randomness of the tokens was analyzed. The tokens in Device A had sufficient entropy to prevent brute-force prediction attacks, effectively mitigating the risk of session hijacking via token guessing. However, should an attacker obtain the device token in another way, access is possible forever. 
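The entropy check can be approximated with a short script. This is only a rough per-character Shannon estimate over a token sample; a real assessment (e.g. with Burp Sequencer) would use far larger samples and additional statistical tests:

```python
import math
from collections import Counter

def shannon_entropy_bits_per_char(tokens: list[str]) -> float:
    """Estimate the per-character Shannon entropy across a token sample."""
    chars = "".join(tokens)
    counts = Counter(chars)
    total = len(chars)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A 32-character token over the full hex alphabet carries at most
# 4 bits/char, i.e. up to 128 bits per token -- far beyond brute force.
```

Tokens that reuse only a handful of symbols, or that share long static prefixes, score far below the theoretical maximum and warrant a closer look.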

Hardware Results

The device is built upon a Quectel BG770A-GL LTE modem and a Nuvoton M2354LJFAE microcontroller. A debug footprint was successfully identified for the Nuvoton microcontroller, and we performed a memory readout via the unlocked Serial Wire Debug (SWD) interface. As analyzing bare-metal firmware is time-consuming, no in-depth analysis could be performed; a quick initial pass showed no interesting strings. Additionally, we successfully captured the UART communication between the microcontroller and the modem, as well as the modem's own debug output. The traffic consisted of standard AT commands such as QISEND and QIRD. The findings suggest a binary protocol for server communication, which would again require considerably more effort to analyze.

Figure 1: Device A, a Quectel modem and Nuvoton microcontroller 
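Captured AT command traffic of this kind can be triaged with a few lines of Python. The capture excerpt below is invented, but the AT+QISEND=<connectID>,<length> syntax matches Quectel's TCP/IP command set:

```python
import re

# Toy excerpt of a captured UART exchange; real captures interleave binary data.
CAPTURE = """\
AT+QISEND=0,24
SEND OK
AT+QIRD=0,1500
+QIRD: 16
OK
AT+QISEND=0,40
SEND OK
"""

AT_SEND = re.compile(r"AT\+QISEND=(\d+),(\d+)")

def sent_payload_lengths(capture: str) -> list[int]:
    """Extract the byte counts announced by AT+QISEND -- a quick way to
    spot the framing of a proprietary binary protocol riding on TCP."""
    return [int(m.group(2)) for m in AT_SEND.finditer(capture)]
```

Recurring payload sizes (here 24 and 40 bytes) are often the first hint at fixed-length message headers in the binary protocol.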

Device B 

Software Results 

Device B used JSON Web Tokens (JWT) for session authorization, which is dependent on the integrity of the signing process and correct server-side validation. 

We tested for common JWT implementation flaws, such as the "none" algorithm attack and the use of weak signing secrets. We found the implementation to be secure, with the backend correctly rejecting tokens with modified headers or signatures. Furthermore, the authorization checks for Device B covered the entire user lifecycle: 

  • Tracker Registration: The process of claiming a new tracker was verified to require a unique, non-predictable identifier, preventing "tracker squatting" where an attacker could claim a device before the legitimate owner.
  • History Access: The API correctly validated that historical location data could only be retrieved by the authenticated owner of the specific tracker.
  • Billing Security: Access to invoices was strictly restricted. An attempt to access an invoice using a different account’s token did not work. 

Additionally, the password reset flow was analyzed for classic account takeover vulnerabilities. None were found.
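The "none"-algorithm check mentioned above can be reproduced with the standard library alone; a correctly implemented backend – like the one observed for Device B – must reject such an unsigned token outright:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_none_token(claims: dict) -> str:
    """Build an unsigned JWT with alg="none" (empty signature segment).
    Used purely as a negative test against the researcher's own session."""
    header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    return f"{header}.{payload}."

# Hypothetical claim set for illustration.
token = forge_none_token({"sub": "account_b"})
```

Replaying such a token against an API endpoint should yield a 401; acceptance would indicate the backend skips signature validation when the header demands it.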

Hardware Results 

Device B is based on an ESP8285 and an STC STC8G2K64S4 microcontroller. Unlike with Device A, no access to the device could be achieved, although some footprints likely used during development were identified. We were also unable to observe internal communication between the components.

Figure 2: Device B, ESP8285 module and STC microcontroller 

Device C 

Software Results 

Device C was similar to Device B in that it also used JWTs for session management.

The assessment of device C also prioritized authorization, checking whether one user could trigger commands on another user’s device. This is an interesting area for pet tech, as the ability to remotely trigger sound or change names could be used for harassment. The results confirmed that the backend validated the ownership relationship for every command execution.

Besides that, we also checked authorization for location history, invoices, order PDFs, and tracker status. No issue was identified regarding these APIs. 

Hardware Results

Built on a Simcom A7670G modem – a manufacturer SEC Consult already has experience with – Device C proved to be the most monolithic of the four. We identified a system UART interface that yielded a boot log, as well as a second UART dedicated to the GPS module, streaming real-time location data in the GNSS NMEA format. We were also able to activate the download mode used to program the device by shorting the UBOOT pin to GND. Using this mode, further exploitation might be feasible but would require more time.

Figure 3: Device C, built on a Simcom modem 
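An NMEA stream like the one on the dedicated UART can be decoded with a few lines of Python. The sentence below is the well-known GPGGA example from NMEA 0183 documentation, not data captured from the device:

```python
from functools import reduce

# Standard textbook GPGGA example sentence (not captured device data).
SENTENCE = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"

def nmea_checksum_ok(sentence: str) -> bool:
    """XOR of all bytes between '$' and '*' must equal the hex suffix."""
    body, _, given = sentence[1:].partition("*")
    return reduce(lambda acc, ch: acc ^ ord(ch), body, 0) == int(given, 16)

def parse_gga_position(sentence: str) -> tuple[float, float]:
    """Convert the ddmm.mmmm / dddmm.mmmm GGA fields to decimal degrees."""
    f = sentence.split(",")
    lat = int(f[2][:2]) + float(f[2][2:]) / 60
    lon = int(f[4][:3]) + float(f[4][3:]) / 60
    if f[3] == "S":
        lat = -lat
    if f[5] == "W":
        lon = -lon
    return lat, lon
```

Watching this bus live makes it immediately visible which positional data the modem firmware consumes before it is wrapped into the proprietary uplink protocol.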

Device D 

Software Results 

Device D included a feature for uploading pet images, which we examined more thoroughly. Authorization checks were in place for all critical features – no vulnerabilities were identified here either. As with the other devices, we also assessed the password reset flow; no issues were found.

Hardware Results 

This device again uses a multi-chip architecture with an ESP8285 and an nRF52832 microcontroller. Debug footprints were identified for both microcontrollers. We successfully connected to the ESP8285 UART (providing a boot log) and read out the memory using esptool. Similar to Device A, a quick analysis of the dumped memory did not immediately reveal interesting plain-text strings. There was not enough time for a comprehensive analysis of the firmware, which is again time-consuming due to its bare-metal nature.

Figure 4: ESP8285 module and nRF52832 microcontroller 
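The quick triage for plain-text strings mentioned above mirrors the classic `strings` utility and is easy to reproduce. The dump below is a toy stand-in for the real memory readout:

```python
import re

# Toy firmware dump; the real dumps were obtained via SWD or esptool.
DUMP = b"\x00\xffBOOT v1.2\x00\x01\x7f" + b"\x13" * 8 + b"wifi_ssid=\x00\x04ab\xfe"

def printable_strings(blob: bytes, min_len: int = 4) -> list[str]:
    """Extract runs of printable ASCII, the first step when hunting for
    credentials, URLs, or config keys in a raw memory dump."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob)]
```

Hits such as SSID keys or API hostnames would mark regions of the dump worth disassembling first; their absence here matches the finding that nothing interesting surfaced in the quick pass.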

Conclusion 

The results of this analysis suggest that while technical security controls are becoming more standard, the privacy architecture of the pet tracking industry remains problematic. 

The hardware assessment across all four trackers revealed no standard architecture, but complex systems built from multiple microcontrollers and communication modules. This goes as far as two main processing units, a GPS module, and an LTE module with eSIM, all of them running some variant of an RTOS or bare-metal firmware. These designs are thus more complex than typical IoT devices, which are often based on a single SoC running standard embedded Linux. Anyone interested in breaking such devices via physical access should expect to dedicate multiple days to the task.

As billions of devices become co-located with our daily lives, the boundary between animal and human data will continue to blur. Independent security research is an important mechanism capable of providing the transparency required to protect consumers in this new reality. 

The methodology demonstrated in this blog post proves that it is possible to conduct high-impact security research without "crossing the line" into unethical activity. The fact that the assessed devices were not found exploitable in this instance shows the effectiveness of current baseline standards. Although we could not break these devices "in a day", in-depth assessments of such devices with sufficient resources (or, even better, including whitebox methods such as source code reviews) are highly advised to create a robust and secure base. This is even more important as from December 2027 the main obligations of the CRA (Cyber Resilience Act), which is already in force, will apply, demanding cybersecurity requirements for products with digital elements sold on the EU market. Moreover, the inherent privacy risks of constant location tracking remain a fundamental issue that technical security alone cannot solve and must be supported by organizational measures.

We hope this kind of work encourages other researchers to approach consumer IoT the same way – by operating with transparency, respecting the privacy of third parties, and focusing on systemic logic flaws rather than destructive exploits, researchers can ensure that the "peace of mind" marketed by pet tech manufacturers is built on a foundation of genuine security and respected privacy. 

 

This blog post was produced by the technical research team consisting of Gerhard Hechenberger and Bernhard Gründling at SEC Consult Vulnerability Lab, with project coordination by Adriane Würfl. Special thanks go to Thea, whose daily walks provided the real-world GPS tracking data that made this research possible. The results reflect the state of the devices as of the date of analysis.

About the author

Bernhard Gründling
SEC Consult
Senior Security Consultant

Bernhard Gründling is a Senior Security Consultant at SEC Consult. As a member of the SEC Consult Vulnerability Lab, he conducts security research on consumer products and applications, highlighting real-world security and privacy risks for everyday users. Besides that, he focuses on all topics concerning IT infrastructure, including security assessments of Windows systems, Active Directory and Unix-based systems.

Gerhard Hechenberger
SEC Consult
Principal Security Consultant

Gerhard is a Principal Security Consultant at SEC Consult who specializes in embedded systems and OT security and works in the SEC Consult Hardware Laboratory in Vienna. His main job is the assessment of embedded systems, IoT/OT devices and OT networks to uncover vulnerabilities. He is a holder of several IT security certificates and published multiple security advisories and blog posts.