Today, we’re announcing v0.59 of the Timefold Platform, along with updates to the Timefold models.
This new version of the Timefold Platform comes with these platform improvements:
- Score Analysis has been improved, with clearer insights into how constraints affect solution quality. The table now shows icons for disabled and locked constraints, and includes filters to focus on matched, triggered, or all constraints for easier analysis. See Interpreting dataset results for more information.
- Secrets management is now available, allowing you to securely store and reuse sensitive values such as API keys and tokens without exposing them in configuration screens or logs. Secrets are encrypted, write-only, tenant-scoped, and can be safely referenced in supported integrations like webhooks and external map providers (a hedged configuration sketch follows this list). See Secrets Management for more information.
- API key permissions have been refined to give you more precise control over what each key can do. Permissions are now better aligned with API operations, making it easier to follow the principle of least privilege and safely support common integration patterns. See API Key permissions for more information.
- Batch deletion of datasets is now supported, allowing you to clean up multiple datasets at once and manage test or outdated data more efficiently.
- Dataset configuration transparency has improved, as the configuration page now shows which configuration profile was used and the effective configuration values applied during the solve.
- Configuration profile editing is safer, with a warning shown when navigating away while there are unsaved changes.
- Various bug fixes and stability improvements, including faster loading of the datasets overview and dataset detail pages for a more responsive UI, and clearer validation feedback when submitting invalid JSON through the UI, making errors easier to understand and fix.
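As a purely illustrative sketch of the secrets feature above, the snippet below shows the general idea of referencing a stored secret from a webhook configuration instead of embedding the raw token. The reference syntax and field names are placeholders, not the platform’s actual format; see Secrets Management for the real mechanism.

```python
# Illustrative sketch only: the reference syntax below is a hypothetical placeholder,
# not the Timefold Platform's actual secret-reference format.
webhook_config = {
    "url": "https://example.com/solver-callback",
    "headers": {
        # The raw token is never written in the configuration or logged; instead, a
        # reference to an encrypted, write-only, tenant-scoped secret is used.
        "Authorization": "Bearer <reference-to-stored-secret>",
    },
}
```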
In addition, this version of the Timefold Platform comes with updates to these Timefold models:
Field Service Routing (v1 | Stable)
- Added weights for preferred vehicles on visits: You can now specify weights for vehicle preferences in Visit.preferredVehiclesWeights, letting you express that one vehicle is more strongly preferred than another. See the Preferred vehicles section for more details; a hedged input sketch follows below.
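As an illustration, the sketch below shows how a visit’s vehicle preferences might be weighted. The surrounding payload shape, including the assumption that Visit.preferredVehiclesWeights maps vehicle IDs to weights, is hypothetical; refer to the Preferred vehicles section for the authoritative schema.

```python
# Illustrative sketch only: the exact Field Service Routing input structure is assumed
# here; consult the Preferred vehicles documentation for the real schema.
visit = {
    "id": "visit-1",
    "preferredVehiclesWeights": {  # field introduced in this release
        "vehicle-a": 2,  # assumed shape: vehicle-a is preferred more strongly...
        "vehicle-b": 1,  # ...than vehicle-b
    },
}
```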
Employee Shift Scheduling (v1 | Stable)
- Granular demand-based scheduling: We have added more granularity to the hourly demand rules with three new fields: includeEmployeeTags, excludeEmployeeTags, and employeeTagMatches. These fields let you scope an hourly demand to specific employees, by inclusion or exclusion based on tags. See Demand-based scheduling for details; a hedged sketch follows this list.
- Employee shift tag validation: We now show a warning when we detect preferredShiftTags or requiredShiftTags on employees but no shiftTagMatchRule in the global rules. The warning indicates that the preferred and required shift tags are ignored when no shiftTagMatchRule is present.
- Duplicate ID validation: We have improved how duplicate IDs are handled: runs now fail more cleanly when duplicate IDs are detected. See the Changelog for details.
- Warning on near-maximum number of shifts: We have added a warning that is triggered when the input dataset contains more than 80% of the maximum number of shifts (400,000) supported by the model.
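As a purely illustrative sketch of the tag-based demand fields mentioned above, the snippet below shows how an hourly demand rule might be scoped to certain employees. The surrounding rule structure and values are assumptions; see Demand-based scheduling for the actual schema.

```python
# Illustrative sketch only: includeEmployeeTags, excludeEmployeeTags and employeeTagMatches
# are named in this release; the surrounding rule structure and values are assumed.
hourly_demand_rule = {
    "minimumEmployees": 2,              # assumed: demand of at least 2 matching employees per hour
    "includeEmployeeTags": ["nurse"],   # only employees tagged "nurse" count toward the demand...
    "excludeEmployeeTags": ["trainee"], # ...unless they are also tagged "trainee"
    "employeeTagMatches": "ALL",        # assumed match mode, e.g. ALL vs ANY of the include tags
}
```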
Pick-up & Delivery Routing (v1 | Preview)
- Naming changes in the input API: To improve clarity and consistency, we have updated how capacities and demands are defined for drivers and jobs. See the upgrade guide for more details.
- Extended output information: The output now includes information about unassigned jobs and their stops, as well as the vehicle’s previous and current load at each stop of an output itinerary. This additional information helps you better understand the solution and analyze the performance of your routing model. See the Changelog for details; a hedged sketch of the extended stop output follows this list.
- Increased maximum number of threads: The maximum number of threads the solver can use has been raised from 1 to 6. This enables multithreaded solving in the PDR model and can lead to faster solving times for larger and more complex problems.
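To illustrate the extended stop information described above, here is a hedged sketch of what a stop in an output itinerary and an unassigned job entry might look like. All field names below are hypothetical placeholders rather than the model’s actual output schema; check the Changelog and API reference for the real shape.

```python
# Illustrative sketch only: every field name here is a hypothetical placeholder; the
# real PDR output schema is documented in the Changelog and API reference.
itinerary_stop = {
    "jobId": "job-42",
    "kind": "PICKUP",       # hypothetical: whether the stop is a pickup or a delivery
    "previousLoad": 3,      # hypothetical: vehicle load before servicing this stop
    "currentLoad": 5,       # hypothetical: vehicle load after servicing this stop
}

unassigned_job = {
    "jobId": "job-99",      # hypothetical: jobs that could not be assigned are now reported
}
```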
Please let us know if you have feedback.