System integration
Note
Started in week 09, with a large update in week 11. I'll update this page to reflect more on the subsystems that make up the RoboDuck 3000 (referred to below as "the device").
C4 model of RoboDuck 3000 www.c4

The C4 model identifies several subsystems, which are described below in more detail. The chosen technology can of course still change.
Navigation
Goal: to get the device from A to B.
Controls: motor, rudder
Public interfaces
void upd_CurrentLocation(gps location) {
// stores the supplied GPS location as the current location
}
void navigate(gps destination, int max_speed, bool geofenced) {
// loop
// calculate speed/direction to reach the destination based on the current location
// ERROR: if geofenced AND destination is outside the geofence
// set power of the motor, not exceeding max_speed
// set position of the rudder
// end loop
}
void set_Geofence(geofence) {
// geofence is a list of GPS coordinates; stores it
}
Interrupts: none
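The "destination is outside the geofence" check in navigate() could be a standard point-in-polygon (ray casting) test over the stored geofence coordinates. A minimal sketch, where the GpsPoint type and function names are my own illustrations, not the final firmware API:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical GPS type; the real firmware type may differ.
struct GpsPoint {
    double lat;
    double lon;
};

// Ray-casting point-in-polygon test: returns true when p lies inside the
// geofence polygon. The ERROR branch in navigate() could use this to reject
// destinations outside the stored geofence.
bool insideGeofence(const std::vector<GpsPoint>& fence, const GpsPoint& p) {
    bool inside = false;
    for (std::size_t i = 0, j = fence.size() - 1; i < fence.size(); j = i++) {
        // Does the edge from fence[j] to fence[i] cross the horizontal
        // line through p's latitude?
        const bool crosses = (fence[i].lat > p.lat) != (fence[j].lat > p.lat);
        if (crosses) {
            // Longitude where the edge intersects that line.
            double lonAtLat = fence[i].lon + (p.lat - fence[i].lat) *
                (fence[j].lon - fence[i].lon) / (fence[j].lat - fence[i].lat);
            if (p.lon < lonAtLat) inside = !inside;
        }
    }
    return inside;
}
```

Note that this treats latitude/longitude as a flat plane, which is fine for a geofence of a few hundred meters on a pond but not for large areas.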
Object detection
Goal: to detect objects based on image(s)
The objects are: buoy, duck, human head in water, boat, swan, ...
Based on the most likely implementation (such as a vision AI model), this module will detect zero or more objects in camera images. Each object will be described by a bounding box, a class id and a probability.
Even if I use Sensecraft, it will listen to commands like invoke and stop. If I'm able to have complete control over it, for instance by using a TensorFlow Lite model, then I could implement a similar public interface myself. So for now I assume this subsystem will have some kind of interface.
Public interfaces:
void activate_ObjectDetection(bool logClassification, bool logImages) {
// initializes the vision model and starts interpreting images from the camera
// when log is enabled it means the image and/or classification is stored
}
void stop_ObjectDetection() {
// pretty obvious; stops interpretation of the images
}
Private functions:
void detectObjects() {
// based on the output of the AI model it decides, per possible object, on
// - probability as an integer number (between 0 and 100)
// - distance in meters (estimated from how much of the object is visible in the image)
}
Interrupts: calls the function detectedObjects() of the brain subsystem when something is found
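The probability and distance values produced by detectObjects() could be derived as below. The struct, the pinhole-model distance estimate and all constants are assumptions on my part; the real object heights and focal length would need camera calibration.

```cpp
#include <cmath>

// Hypothetical detection record matching the description above:
// bounding box, class id and probability (0..100).
struct Detection {
    int classId;
    int probability;                 // percent, 0..100
    int boxX, boxY, boxW, boxH;      // bounding box in pixels
};

// Rough monocular distance estimate from the bounding-box height, using the
// pinhole camera model: distance = realHeight * focalLengthPx / pixelHeight.
// realHeightM (e.g. ~0.25 m for a buoy) and focalLengthPx are assumptions.
double estimateDistanceM(double realHeightM, double focalLengthPx,
                         int boxHeightPx) {
    if (boxHeightPx <= 0) return -1.0;  // invalid box: no estimate
    return realHeightM * focalLengthPx / static_cast<double>(boxHeightPx);
}

// Vision models typically output scores in 0.0..1.0; convert to the integer
// percentage described for detectObjects(), clamped to 0..100.
int toProbabilityPercent(float score) {
    int pct = static_cast<int>(std::lround(score * 100.0f));
    if (pct < 0) pct = 0;
    if (pct > 100) pct = 100;
    return pct;
}
```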
Sound & Visuals
Goal: responsible for all sounds and for all lights, both above water (visible to ships and vessels) and under water
Public interfaces:
void makeQuackSound() {
// makes a quack sound
}
void flashBack() {
// shows a bright light inside the duck that flashes for 4 seconds
}
(optional)
void enable/disableNavigationLights() {
}
void enable/disableGuidanceLights() {
}
void enableLowBatteryWarning() {
}
The brain
Goal: Combine information and guide the different sub systems
Public interfaces:
void detectedObjects(objects) {
// objects is an array/list, each entry with a bounding box, class id and probability
// The objects are: buoy, duck, human head in water, boat, swan, ...
}
void processCommand(String cmd) {
// starts to behave according to the command
// cmd contains a predefined command, like the ones below
// as a general acknowledgement it will flash the back of the duck
// GENERAL
// `stop` = stops everything; no movement; no images etc; just floats
// `come home` = stops everything; navigates to the home coordinates
// `update geofence c1, c2, c3, c4, ..., cz`
// `
// STEALTH MODE
// '
// `follow route` = stops everything; navigates according to a predefined route
// `...`
}
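The first step inside processCommand() would be mapping the command string to an action. A minimal parsing sketch for the commands listed above; the enum and function name are my own, and the real firmware would dispatch to the subsystems instead of returning a value:

```cpp
#include <string>

// Illustrative command codes for the predefined commands.
enum class Command { Stop, ComeHome, UpdateGeofence, FollowRoute, Unknown };

// Maps an incoming command string to a command code. "update geofence" is
// matched as a prefix because it carries a coordinate list after it.
Command parseCommand(const std::string& cmd) {
    if (cmd == "stop") return Command::Stop;
    if (cmd == "come home") return Command::ComeHome;
    if (cmd.rfind("update geofence", 0) == 0) return Command::UpdateGeofence;
    if (cmd == "follow route") return Command::FollowRoute;
    return Command::Unknown;
}
```

An Unknown result could trigger a different acknowledgement (say, no flash) so the operator notices a mistyped command.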
Storage
Goal: responsible for storing and retrieving different kinds of data
Public interfaces:
void logImage (binary image, coordinates) {
// logs the image with date, time and coordinates
// maybe that metadata fits inside the image itself, or a separate log file is necessary
}
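For the open question above, the separate-log-file option could be as simple as one CSV line per stored image, linking the filename to a timestamp and GPS coordinates. A sketch under that assumption; the field order and names are made up for illustration:

```cpp
#include <cstdio>
#include <string>

// Builds one CSV log line: filename, ISO timestamp, latitude, longitude.
// Six decimals of lat/lon is roughly 0.1 m resolution, plenty for a pond.
std::string makeLogLine(const std::string& filename,
                        const std::string& isoTimestamp,
                        double lat, double lon) {
    char buf[160];
    std::snprintf(buf, sizeof(buf), "%s,%s,%.6f,%.6f",
                  filename.c_str(), isoTimestamp.c_str(), lat, lon);
    return std::string(buf);
}
```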
Remote control
Goal: provides a way to give commands to the duck from a distance.
(Assumption: there is already a connection with the device.)
Public interfaces:
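One plausible shape for this subsystem, assuming a simple line-based text protocol over the existing connection (serial, Wi-Fi, ...): split the received bytes into one command per line and hand each line to the brain's processCommand(). The splitting helper below is an assumption, not a committed interface.

```cpp
#include <string>
#include <vector>

// Splits a received text buffer into newline-terminated commands; empty
// lines are skipped. Each resulting string would be passed on to the
// brain's processCommand().
std::vector<std::string> splitLines(const std::string& received) {
    std::vector<std::string> cmds;
    std::string current;
    for (char c : received) {
        if (c == '\n') {
            if (!current.empty()) cmds.push_back(current);
            current.clear();
        } else {
            current += c;
        }
    }
    if (!current.empty()) cmds.push_back(current);  // trailing command without newline
    return cmds;
}
```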