Compare commits


7 commits

Author     SHA1        Message                                          Date
ookami125  f2ef9bbbc3  Schedule interval improvement                    2025-05-03 00:31:26 -04:00
ookami125  04cb434a60  Fix for bool and includes                        2025-05-02 22:26:04 -04:00
ookami125  7d1962307c  Added readme and license files                   2025-04-18 12:00:02 -04:00
ookami125  4aa7fc1ff4  Fixes for #1 and #2 and basic rls implemetation  2025-04-17 01:32:59 -04:00
ookami125  9456d51538  Got blackholio working                           2025-04-12 18:19:07 -04:00
ookami125  c64475b0a4  changes needed for 0.14.0                        2025-04-10 00:51:03 -04:00
ookami125  5fa0db871d  updated to 0.14.0                                2025-04-08 00:02:08 -04:00
14 changed files with 2076 additions and 440 deletions

LICENSE-APACHE-2.0 (new file)

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

LICENSE-MPL-2.0 (new file)

@@ -0,0 +1,373 @@
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at https://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.

README.md (new file)

@@ -0,0 +1,34 @@
## Badges
Add badges from somewhere like: [shields.io](https://shields.io/)
[![MPL-2.0 License](https://img.shields.io/badge/License-MPL--2.0-green.svg)](https://choosealicense.com/licenses/mpl-2.0/) [![apache-2.0 License](https://img.shields.io/badge/License-apache--2.0-yellow.svg)](https://choosealicense.com/licenses/apache-2.0/)
# SpacetimeDB-Zig
This is an example implementation of a Zig module for SpacetimeDB.
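As a rough sketch of what the module's API looks like (distilled from the `src/main.zig` changes in this compare; the `Person` table and `add` reducer below are hypothetical examples, while `spacetime.Spec`, `spacetime.Table`, `spacetime.Reducer`, and `spacetime.ReducerContext` are the types that file actually uses):
```zig
// Hypothetical minimal module, modeled on the patterns in src/main.zig.
const std = @import("std");
const spacetime = @import("spacetime.zig");
comptime { _ = spacetime; }

// Route std.log output through SpacetimeDB, as main.zig does.
pub const std_options = std.Options{
    .log_level = .debug,
    .logFn = spacetime.logFn,
};

// Example row type (made up for illustration).
pub const Person = struct {
    id: u32,
    name: []const u8,
};

// Tables and reducers are declared together in a single Spec value.
pub const spacespec = spacetime.Spec{
    .tables = &.{
        spacetime.Table{
            .name = "person",
            .schema = Person,
            .attribs = .{
                .access = .Public,
                .primary_key = "id",
                .autoinc = &.{ "id" },
            },
        },
    },
    .reducers = &.{
        spacetime.Reducer(.{
            .name = "add",
            .params = &.{ "name" },
            .func = &add,
        }),
    },
};

// Reducers receive a ReducerContext plus their declared parameters.
pub fn add(ctx: *spacetime.ReducerContext, name: []const u8) !void {
    _ = try ctx.db.get("person").insert(Person{ .id = 0, .name = name });
    std.log.info("added {s}", .{name});
}
```
The full Blackholio module added by this compare (tables, scheduled reducers, row-level security) is in the `src/main.zig` diff below.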
## Authors
- [@ookami125](https://www.github.com/ookami125)
- [@suirad](https://github.com/suirad)
## Run Locally
```bash
git clone https://github.com/ookami125/SpacetimeDB-Zig
cd SpacetimeDB-Zig
nohup spacetime start &
./spacetimedb.sh publish
./spacetimedb.sh logs -f
```
## License
For the main.zig file, which is a reimplementation of Blackholio, we're using the license SpacetimeDB uses for Blackholio:
[Apache-2.0](https://github.com/clockworklabs/Blackholio/blob/master/LICENSE)
All other files in the repo are under [MPL-2.0](https://github.com/ookami125/SpacetimeDB-Zig/blob/master/LICENSE-MPL-2.0)

build.zig

@@ -1,3 +1,5 @@
+// Copyright 2025 Tyler Peterson, Licensed under MPL-2.0
 const std = @import("std");
 // Although this function looks imperative, note that its job is to
@@ -21,7 +23,7 @@ pub fn build(b: *std.Build) void {
     const optimize = b.standardOptimizeOption(.{});
     const lib = b.addExecutable(.{
-        .name = "stdb-zig-helloworld",
+        .name = "blackholio",
         .root_source_file = b.path("src/main.zig"),
         .target = target,
         .optimize = optimize,

build.zig.zon

@@ -1,3 +1,5 @@
+// Copyright 2025 Tyler Peterson, Licensed under MPL-2.0
 .{
     // This is the default name used by packages depending on this one. For
     // example, when a user runs `zig fetch --save <url>`, this field is used
@@ -6,16 +8,29 @@
     //
     // It is redundant to include "zig" in this name because it is already
     // within the Zig package namespace.
-    .name = "spacetimedb-zig",
+    .name = .spacetimedb_zig,
     // This is a [Semantic Version](https://semver.org/).
     // In a future version of Zig it will be used for package deduplication.
     .version = "0.0.0",
-    // This field is optional.
-    // This is currently advisory only; Zig does not yet do anything
-    // with this value.
-    //.minimum_zig_version = "0.11.0",
+    // Together with name, this represents a globally unique package
+    // identifier. This field is generated by the Zig toolchain when the
+    // package is first created, and then *never changes*. This allows
+    // unambiguous detection of one package being an updated version of
+    // another.
+    //
+    // When forking a Zig project, this id should be regenerated (delete the
+    // field and run `zig build`) if the upstream project is still maintained.
+    // Otherwise, the fork is *hostile*, attempting to take control over the
+    // original project's identity. Thus it is recommended to leave the comment
+    // on the following line intact, so that it shows up in code reviews that
+    // modify the field.
+    .fingerprint = 0xc7c3fc1d5cbdcdec, // Changing this has security and trust implications.
+    // Tracks the earliest Zig version that the package considers to be a
+    // supported use case.
+    .minimum_zig_version = "0.14.0",
     // This field is optional.
     // Each dependency must either provide a `url` and `hash`, or a `path`.
@@ -27,7 +42,8 @@
        //.example = .{
        //    // When updating this field to a new URL, be sure to delete the corresponding
        //    // `hash`, otherwise you are communicating that you expect to find the old hash at
-       //    // the new URL.
+       //    // the new URL. If the contents of a URL change this will result in a hash mismatch
+       //    // which will prevent zig from using it.
        //    .url = "https://example.com/foo.tar.gz",
        //
        //    // This is computed from the file contents of the directory of files that is

(deleted file)

@@ -1,6 +0,0 @@
-#!/bin/bash
-spacetime logout
-spacetime login --server-issued-login local
-spacetime publish -y --server local --bin-path=zig-out/bin/stdb-zig-helloworld.wasm
-DB_HASH=$(spacetime list 2>/dev/null | tail -1)
-spacetime logs $DB_HASH

spacetimedb.sh

@@ -1,15 +1,17 @@
 #!/bin/bash
+# Copyright 2025 Tyler Peterson, Licensed under MPL-2.0
 DB_HASH=$(spacetime list 2>/dev/null | tail -1)
 func=$1;
 shift;
 if [[ "$func" == "publish" ]]; then
   zig build -freference-trace=100 || exit 1
-  spacetime logout
+  #spacetime logout
   spacetime login --server-issued-login local
-  spacetime publish -y --server local --bin-path=zig-out/bin/stdb-zig-helloworld.wasm
+  spacetime publish -y --server local --bin-path=zig-out/bin/blackholio.wasm blackholio
   DB_HASH=$(spacetime list 2>/dev/null | tail -1)
-  spacetime logs $DB_HASH
+  spacetime logs $DB_HASH -n 15
   exit $?
 fi

src/main.zig

@@ -1,135 +1,791 @@
// Copyright 2025 Clockwork Labs, Licensed under Apache-2.0
// Copyright 2025 Tyler Peterson, Licensed under Apache-2.0
const std = @import("std"); const std = @import("std");
const spacetime = @import("spacetime.zig"); const spacetime = @import("spacetime.zig");
const utils = @import("spacetime/utils.zig");
comptime { _ = spacetime; } comptime { _ = spacetime; }
const stdb_math = @import("spacetime/math.zig");
const DbVector2 = stdb_math.DbVector2;
const DbVector3 = stdb_math.DbVector3;
const ScheduleAt = spacetime.ScheduleAt;
const START_PLAYER_MASS: u32 = 15;
const START_PLAYER_SPEED: u32 = 10;
const FOOD_MASS_MIN: u32 = 2;
const FOOD_MASS_MAX: u32 = 4;
const TARGET_FOOD_COUNT: usize = 600;
const MINIMUM_SAFE_MASS_RATIO: f32 = 0.85;
const MIN_MASS_TO_SPLIT: u32 = START_PLAYER_MASS * 2;
const MAX_CIRCLES_PER_PLAYER: u32 = 16;
const SPLIT_RECOMBINE_DELAY_SEC: f32 = 5.0;
const SPLIT_GRAV_PULL_BEFORE_RECOMBINE_SEC: f32 = 2.0;
const ALLOWED_SPLIT_CIRCLE_OVERLAP_PCT: f32 = 0.9;
//1 == instantly separate circles. less means separation takes time
const SELF_COLLISION_SPEED: f32 = 0.05;
pub const std_options = std.Options{ pub const std_options = std.Options{
.log_level = .debug, .log_level = .debug,
.logFn = spacetime.logFn, .logFn = spacetime.logFn,
}; };
const TableAttribs = struct { pub const spacespec = spacetime.Spec{
scheduled: ?[]const u8, .tables = &.{
autoinc: ?[]const []const u8, spacetime.Table{
primary_key: ?[]const u8, .name = "config",
unique: ?[]const []const u8, .schema = Config,
.attribs = .{
.access = .Public,
.primary_key = "id",
}
},
spacetime.Table{
.name = "entity",
.schema = Entity,
.attribs = .{
.access = .Public,
.primary_key = "entity_id",
.autoinc = &.{ "entity_id", },
}
},
spacetime.Table{
.name = "circle",
.schema = Circle,
.attribs = .{
.access = .Public,
.primary_key = "entity_id",
.autoinc = &.{ "entity_id", },
.indexes = &.{ .{ .name = "player_id", .layout = .BTree }, },
}
},
spacetime.Table{
.name = "player",
.schema = Player,
.attribs = .{
.access = .Public,
.primary_key = "identity",
.autoinc = &.{ "player_id", },
.unique = &.{ "player_id", },
}
},
spacetime.Table{
.name = "logged_out_player",
.schema = Player,
.attribs = .{
.access = .Public,
.primary_key = "identity",
.unique = &.{ "player_id", },
}
},
spacetime.Table{
.name = "food",
.schema = Food,
.attribs = .{
.access = .Public,
.primary_key = "entity_id",
}
},
spacetime.Table{
.name = "move_all_players_timer",
.schema = MoveAllPlayersTimer,
.attribs = .{
.primary_key = "scheduled_id",
.autoinc = &.{ "scheduled_id", },
.schedule = "move_all_players",
}
},
spacetime.Table{
.name = "spawn_food_timer",
.schema = SpawnFoodTimer,
.attribs = .{
.primary_key = "scheduled_id",
.autoinc = &.{ "scheduled_id" },
.schedule = "spawn_food",
}
},
spacetime.Table{
.name = "circle_decay_timer",
.schema = CircleDecayTimer,
.attribs = .{
.primary_key = "scheduled_id",
.autoinc = &.{ "scheduled_id" },
.schedule = "circle_decay",
}
},
spacetime.Table{
.name = "circle_recombine_timer",
.schema = CircleRecombineTimer,
.attribs = .{
.primary_key = "scheduled_id",
.autoinc = &.{ "scheduled_id" },
.schedule = "circle_recombine",
}
},
spacetime.Table{
.name = "consume_entity_timer",
.schema = ConsumeEntityTimer,
.attribs = .{
.primary_key = "scheduled_id",
.autoinc = &.{ "scheduled_id" },
.schedule = "consume_entity",
}
}
},
.reducers = &.{
spacetime.Reducer(.{
.name = "init",
.lifecycle = .Init,
.func = &init,
}),
spacetime.Reducer(.{
.name = "client_connected",
.lifecycle = .OnConnect,
.func = &connect,
}),
spacetime.Reducer(.{
.name = "client_disconnected",
.lifecycle = .OnDisconnect,
.func = &disconnect,
}),
spacetime.Reducer(.{
.name = "enter_game",
.params = &.{ "name" },
.func = &enter_game,
}),
spacetime.Reducer(.{
.name = "respawn",
.func = &respawn,
}),
spacetime.Reducer(.{
.name = "suicide",
.func = &suicide,
}),
spacetime.Reducer(.{
.name = "update_player_input",
.func = &update_player_input,
.params = &.{ "direction", },
}),
spacetime.Reducer(.{
.name = "move_all_players",
.func = &move_all_players,
.params = &.{ "_timer", },
}),
spacetime.Reducer(.{
.name = "consume_entity",
.func = &consume_entity,
.params = &.{ "request", },
}),
spacetime.Reducer(.{
.name = "player_split",
.func = &player_split,
}),
spacetime.Reducer(.{
.name = "spawn_food",
.func = &spawn_food,
.params = &.{ "_timer", },
}),
spacetime.Reducer(.{
.name = "circle_decay",
.func = &circle_decay,
.params = &.{ "_timer", },
}),
spacetime.Reducer(.{
.name = "circle_recombine",
.func = &circle_recombine,
.params = &.{ "_timer", },
})
},
.row_level_security = &.{
"SELECT * FROM logged_out_player WHERE identity = :sender"
}
}; };
const TableAttribsPair = struct { pub const Config = struct {
schema: type, id: u32,
attribs: TableAttribs, world_size: u64,
}; };
comptime { pub const Entity = struct {
var attributeList: []const TableAttribsPair = &.{}; entity_id: u32,
} position: DbVector2,
mass: u32,
};
fn removeComptimeFields(data: type) type { pub const Circle = struct {
const typeInfo = @typeInfo(data).@"struct"; entity_id: u32,
var newFields: []const std.builtin.Type.StructField = &.{}; player_id: u32,
direction: DbVector2,
speed: f32,
last_split_time: spacetime.Timestamp,
};
inline for(std.meta.fields(data)) |field| { pub const Player = struct {
if(!field.is_comptime) { identity: spacetime.Identity,
newFields = newFields ++ &[_]std.builtin.Type.StructField{ field }; player_id: u32,
} name: []const u8,
pub fn destroy(self: *@This(), allocator: std.mem.Allocator) void {
allocator.free(self.name);
allocator.destroy(self);
} }
};
return @Type(.{ pub const Food = struct {
.@"struct" = std.builtin.Type.Struct{ entity_id: u32,
.backing_integer = typeInfo.backing_integer, };
.decls = typeInfo.decls,
.fields = newFields,
.is_tuple = typeInfo.is_tuple,
.layout = typeInfo.layout,
}
});
}
fn Table(data: type) spacetime.Table { pub const SpawnFoodTimer = struct {
const fieldIdx = std.meta.fieldIndex(data, "__spacetime_10.0__attribs__");
if(fieldIdx == null) return .{ .schema = data, .schema_name = @typeName(data), };
const attribs: TableAttribs = utils.getMemberDefaultValue(data, "__spacetime_10.0__attribs__");
return .{
.schema = removeComptimeFields(data),
.schema_name = @typeName(data),
.primary_key = attribs.primary_key,
//.schedule_reducer = attribs.scheduled,
.unique = attribs.unique,
.autoinc = attribs.autoinc,
};
}
fn TableSchema(data: TableAttribsPair) type {
const attribs: TableAttribs = data.attribs;
attributeList = attributeList ++ &[1]TableAttribsPair{ data };
var newFields: []const std.builtin.Type.StructField = &.{};
newFields = newFields ++ &[_]std.builtin.Type.StructField{
std.builtin.Type.StructField{
.alignment = @alignOf(TableAttribs),
.default_value = @ptrCast(&attribs),
.is_comptime = false,
.name = "__spacetime_10.0__attribs__",
.type = TableAttribs,
}
};
newFields = newFields ++ std.meta.fields(data.schema);
const newStruct: std.builtin.Type.Struct = .{
.backing_integer = null,
.decls = &[_]std.builtin.Type.Declaration{},
.fields = newFields,
.is_tuple = false,
.layout = .auto
};
return @Type(.{
.@"struct" = newStruct,
});
}
//#[spacetimedb::table(name = move_all_players_timer, scheduled(move_all_players))]
pub const move_all_players_timer = Table(MoveAllPlayersTimer);
pub const MoveAllPlayersTimer = TableSchema(.{
.schema = struct {
scheduled_id: u64, scheduled_id: u64,
scheduled_at: spacetime.ScheduleAt, scheduled_at: spacetime.ScheduleAt,
}, };
.attribs = TableAttribs{
.scheduled = "move_all_players_reducer",
.autoinc = &.{"scheduled_id"},
.primary_key = "scheduled_id",
.unique = &.{},
}
});
pub const Init: spacetime.Reducer = .{ .func_type = @TypeOf(InitReducer), .func = @ptrCast(&InitReducer), .lifecycle = .Init, }; pub const CircleDecayTimer = struct {
pub fn InitReducer(ctx: *spacetime.ReducerContext) !void { scheduled_id: u64,
scheduled_at: spacetime.ScheduleAt,
};
pub const CircleRecombineTimer = struct {
scheduled_id: u64,
scheduled_at: spacetime.ScheduleAt,
player_id: u32,
};
pub const ConsumeEntityTimer = struct {
scheduled_id: u64,
scheduled_at: spacetime.ScheduleAt,
consumed_entity_id: u32,
consumer_entity_id: u32,
};
pub const MoveAllPlayersTimer = struct {
scheduled_id: u64,
scheduled_at: spacetime.ScheduleAt,
};
pub fn init(ctx: *spacetime.ReducerContext) !void {
std.log.info("Initializing...", .{}); std.log.info("Initializing...", .{});
try ctx.db.get("move_all_players_timer").insert(MoveAllPlayersTimer{ _ = try ctx.db.get("config").insert(Config {
.id = 0,
.world_size = 1000,
});
_ = try ctx.db.get("circle_decay_timer").insert(CircleDecayTimer {
.scheduled_id = 0, .scheduled_id = 0,
.scheduled_at = .{ .Interval = .{ .__time_duration_micros__ = 50 * std.time.us_per_ms }} .scheduled_at = ScheduleAt.interval(5, .Seconds),
});
_ = try ctx.db.get("spawn_food_timer").insert(SpawnFoodTimer {
.scheduled_id = 0,
.scheduled_at = ScheduleAt.interval(500, .Milliseconds),
});
_ = try ctx.db.get("move_all_players_timer").insert(MoveAllPlayersTimer {
.scheduled_id = 0,
.scheduled_at = ScheduleAt.interval(50, .Milliseconds),
}); });
} }
pub const move_all_players = spacetime.Reducer{ pub fn connect(ctx: *spacetime.ReducerContext) !void {
.func_type = @TypeOf(move_all_players_reducer), // Called everytime a new client connects
.func = @ptrCast(&move_all_players_reducer), std.log.info("[OnConnect]", .{});
.params = &.{ "timer" } const nPlayer = try ctx.db.get("logged_out_player").col("identity").find(.{ .identity = ctx.sender });
}; if (nPlayer) |player| {
pub fn move_all_players_reducer(ctx: *spacetime.ReducerContext, timer: MoveAllPlayersTimer) !void { _ = try ctx.db.get("player").insert(player);
_ = ctx; try ctx.db.get("logged_out_player").col("identity").delete(.{ .identity = player.identity });
std.log.info("(id: {}) Move Players!", .{timer.scheduled_id}); } else {
return; _ = try ctx.db.get("player").insert(Player {
.identity = ctx.sender,
.player_id = 0,
.name = "",
});
}
} }
// pub const say_hello = spacetime.Reducer{ .func_type = @TypeOf(say_hello_reducer), .func = @ptrCast(&say_hello_reducer)}; pub fn disconnect(ctx: *spacetime.ReducerContext) !void {
// Called everytime a client disconnects
std.log.info("[OnDisconnect]", .{});
const nPlayer = try ctx.db.get("player").col("identity").find(.{ .identity = ctx.sender});
if(nPlayer == null) {
std.log.err("Disconnecting player doesn't have a valid players row!",.{});
return;
}
// pub fn say_hello_reducer(ctx: *spacetime.ReducerContext) !void { const player = nPlayer.?;
// _ = ctx; _ = try ctx.db.get("logged_out_player").insert(player);
// std.log.info("Hello!", .{}); try ctx.db.get("player").col("identity").delete(.{ .identity = ctx.sender});
// return;
// }
// Remove any circles from the arena
var iter = try ctx.db.get("circle").col("player_id").filter(.{ .player_id = player.player_id });
defer iter.close();
while (try iter.next()) |circle_val| {
try ctx.db.get("entity").col("entity_id").delete(.{ .entity_id = circle_val.entity_id, });
try ctx.db.get("circle").col("entity_id").delete(.{ .entity_id = circle_val.entity_id, });
}
}
pub fn enter_game(ctx: *spacetime.ReducerContext, name: []const u8) !void {
std.log.info("Creating player with name {s}", .{name});
var player: ?Player = try ctx.db.get("player").col("identity").find(.{ .identity = ctx.sender });
const player_id = player.?.player_id;
player.?.name = name;
try ctx.db.get("player").col("identity").update(player.?);
_ = try spawn_player_initial_circle(ctx, player_id);
}
fn gen_range(rng: *std.Random.DefaultPrng, min: f32, max: f32) f32 {
return @floatCast(std.Random.float(rng.random(), f64) * (@as(f64, @floatCast(max)) - @as(f64, @floatCast(min))) + @as(f64, @floatCast(min)));
}
fn spawn_player_initial_circle(ctx: *spacetime.ReducerContext, player_id: u32) !Entity {
var rng = ctx.rng;
const world_size = (try ctx
.db.get("config").col("id")
.find(.{ .id = 0, })).?.world_size;
const player_start_radius = mass_to_radius(START_PLAYER_MASS);
const x = gen_range(&rng, player_start_radius, (@as(f32, @floatFromInt(world_size)) - player_start_radius));
const y = gen_range(&rng, player_start_radius, (@as(f32, @floatFromInt(world_size)) - player_start_radius));
return spawn_circle_at(
ctx,
player_id,
START_PLAYER_MASS,
DbVector2 { .x = x, .y = y },
ctx.timestamp,
);
}
fn spawn_circle_at(
ctx: *spacetime.ReducerContext,
player_id: u32,
mass: u32,
position: DbVector2,
timestamp: spacetime.Timestamp,
) !Entity {
const entity = try ctx.db.get("entity").insert(.{
.entity_id = 0,
.position = position,
.mass = mass,
});
_ = try ctx.db.get("circle").insert(.{
.entity_id = entity.entity_id,
.player_id = player_id,
.direction = DbVector2 { .x = 0.0, .y = 1.0 },
.speed = 0.0,
.last_split_time = timestamp,
});
return entity;
}
//#[spacetimedb::reducer]
pub fn respawn(ctx: *spacetime.ReducerContext) !void {
const player = (try ctx
.db.get("player")
.col("identity")
.find(.{ .identity = ctx.sender})).?;
_ = try spawn_player_initial_circle(ctx, player.player_id);
}
//#[spacetimedb::reducer]
pub fn suicide(ctx: *spacetime.ReducerContext) !void {
const player = (try ctx
.db
.get("player")
.col("identity")
.find(.{ .identity = ctx.sender})).?;
var circles = try ctx.db.get("circle").col("player_id").filter(.{ .player_id = player.player_id});
defer circles.close();
while(try circles.next()) |circle| {
try destroy_entity(ctx, circle.entity_id);
}
}
//#[spacetimedb::reducer]
pub fn update_player_input(ctx: *spacetime.ReducerContext, direction: DbVector2) !void {
std.log.info("player input updated!", .{});
const player = (try ctx
.db
.get("player")
.col("identity")
.find(.{ .identity = ctx.sender})).?;
var circles = try ctx.db.get("circle").col("player_id").filter(.{ .player_id = player.player_id});
defer circles.close();
while(try circles.next()) |circle| {
var copy_circle = circle;
copy_circle.direction = direction.normalized();
copy_circle.speed = std.math.clamp(direction.magnitude(), 0.0, 1.0);
try ctx.db.get("circle").col("entity_id").update(copy_circle);
}
}
fn is_overlapping(a: *Entity, b: *Entity) bool {
const dx = a.position.x - b.position.x;
const dy = a.position.y - b.position.y;
const distance_sq = dx * dx + dy * dy;
const radius_a = mass_to_radius(a.mass);
const radius_b = mass_to_radius(b.mass);
// If the distance between the two circle centers is less than the
// maximum radius, then the center of the smaller circle is inside
// the larger circle. This gives some leeway for the circles to overlap
// before being eaten.
const max_radius = @max(radius_a, radius_b);
return distance_sq <= max_radius * max_radius;
}
fn mass_to_radius(mass: u32) f32 {
return @sqrt(@as(f32, @floatFromInt(mass)));
}
fn mass_to_max_move_speed(mass: u32) f32 {
return 2.0 * @as(f32, @floatFromInt(START_PLAYER_SPEED)) / (1.0 + @sqrt(@as(f32, @floatFromInt(mass)) / @as(f32, @floatFromInt(START_PLAYER_MASS))));
}
pub fn move_all_players(ctx: *spacetime.ReducerContext, _timer: MoveAllPlayersTimer) !void {
// TODO identity check
// let span = spacetimedb::log_stopwatch::LogStopwatch::new("tick");
//std.log.info("_timer: {}", .{ _timer.scheduled_id });
_ = _timer;
const world_size = (try ctx
.db.get("config").col("id")
.find(.{ .id = 0 })).?.world_size;
var circle_directions = std.AutoHashMap(u32, DbVector2).init(ctx.allocator);
var circleIter = try ctx.db.get("circle").iter();
defer circleIter.close();
while(try circleIter.next()) |circle| {
try circle_directions.put(circle.entity_id, circle.direction.scale(circle.speed));
}
var playerIter = try ctx.db.get("player").iter();
defer playerIter.close();
while(try playerIter.next()) |player| {
var circles = std.ArrayList(Circle).init(ctx.allocator);
var circlesIter1 = try ctx.db.get("circle").col("player_id")
.filter(.{ .player_id = player.player_id});
defer circlesIter1.close();
while(try circlesIter1.next()) |circle| {
try circles.append(circle);
}
var player_entities = std.ArrayList(Entity).init(ctx.allocator);
for(circles.items) |c| {
try player_entities.append((try ctx.db.get("entity").col("entity_id").find(.{ .entity_id = c.entity_id})).?);
}
if(player_entities.items.len <= 1) {
continue;
}
const count = player_entities.items.len;
// Gravitate circles towards other circles before they recombine
for(0..count) |i| {
const circle_i = circles.items[i];
const time_since_split = ctx.timestamp
.DurationSince(circle_i.last_split_time)
.as_f32(.Seconds);
const time_before_recombining = @max(SPLIT_RECOMBINE_DELAY_SEC - time_since_split, 0.0);
if(time_before_recombining > SPLIT_GRAV_PULL_BEFORE_RECOMBINE_SEC) {
continue;
}
const entity_i = player_entities.items[i];
for (player_entities.items) |entity_j| {
if(entity_i.entity_id == entity_j.entity_id) continue;
var diff = entity_i.position.sub(entity_j.position);
var distance_sqr = diff.sqr_magnitude();
if(distance_sqr <= 0.0001) {
diff = DbVector2{ .x = 1.0, .y = 0.0 };
distance_sqr = 1.0;
}
const radius_sum = mass_to_radius(entity_i.mass) + mass_to_radius(entity_j.mass);
if(distance_sqr > radius_sum * radius_sum) {
const gravity_multiplier =
1.0 - time_before_recombining / SPLIT_GRAV_PULL_BEFORE_RECOMBINE_SEC;
const vec = diff.normalized()
.scale(radius_sum - @sqrt(distance_sqr))
.scale(gravity_multiplier)
.scale(0.05)
.scale( 1.0 / @as(f32, @floatFromInt(count)));
circle_directions.getPtr(entity_i.entity_id).?.add_to(vec.scale( 1.0 / 2.0));
circle_directions.getPtr(entity_j.entity_id).?.sub_from(vec.scale( 1.0 / 2.0));
}
}
}
// Force circles apart
for(0..count) |i| {
const slice2 = player_entities.items[i+1..];
const entity_i = player_entities.items[i];
for (0..slice2.len) |j| {
const entity_j = slice2[j];
var diff = entity_i.position.sub(entity_j.position);
var distance_sqr = diff.sqr_magnitude();
if(distance_sqr <= 0.0001) {
diff = DbVector2{.x = 1.0, .y = 0.0};
distance_sqr = 1.0;
}
const radius_sum = mass_to_radius(entity_i.mass) + mass_to_radius(entity_j.mass);
const radius_sum_multiplied = radius_sum * ALLOWED_SPLIT_CIRCLE_OVERLAP_PCT;
if(distance_sqr < radius_sum_multiplied * radius_sum_multiplied) {
const vec = diff.normalized()
.scale(radius_sum - @sqrt(distance_sqr))
.scale(SELF_COLLISION_SPEED);
circle_directions.getPtr(entity_i.entity_id).?.add_to(vec.scale( 1.0 / 2.0));
circle_directions.getPtr(entity_j.entity_id).?.sub_from(vec.scale( 1.0 / 2.0));
}
}
}
}
var circleIter2 = try ctx.db.get("circle").iter();
defer circleIter2.close();
while(try circleIter2.next()) |circle| {
const circle_entity_n = (ctx.db.get("entity").col("entity_id").find(.{ .entity_id = circle.entity_id }) catch {
continue;
});
var circle_entity = circle_entity_n.?;
const circle_radius = mass_to_radius(circle_entity.mass);
const direction = circle_directions.get(circle.entity_id).?;
const new_pos = circle_entity.position.add(direction.scale(mass_to_max_move_speed(circle_entity.mass)));
const min = circle_radius;
const max = @as(f32, @floatFromInt(world_size)) - circle_radius;
if(max < min) continue;
circle_entity.position.x = std.math.clamp(new_pos.x, min, max);
circle_entity.position.y = std.math.clamp(new_pos.y, min, max);
try ctx.db.get("entity").col("entity_id").update(circle_entity);
}
// Check collisions
var entities = std.AutoHashMap(u32, Entity).init(ctx.allocator);
var entitiesIter = try ctx.db.get("entity").iter();
defer entitiesIter.close();
while(try entitiesIter.next()) |e| {
try entities.put(e.entity_id, e);
}
var circleIter3 = try ctx.db.get("circle").iter();
defer circleIter3.close();
while(try circleIter3.next()) |circle| {
// let span = spacetimedb::time_span::Span::start("collisions");
var circle_entity = entities.get(circle.entity_id).?;
var entityIter = entities.iterator();
while (entityIter.next()) |other_entity| {
if(other_entity.value_ptr.entity_id == circle_entity.entity_id) {
continue;
}
if(is_overlapping(&circle_entity, other_entity.value_ptr)) {
const other_circle_n = try ctx.db.get("circle").col("entity_id").find(.{ .entity_id = other_entity.value_ptr.entity_id });
if (other_circle_n) |other_circle| {
if(other_circle.player_id != circle.player_id) {
const mass_ratio = @as(f32, @floatFromInt(other_entity.value_ptr.mass)) / @as(f32, @floatFromInt(circle_entity.mass));
if(mass_ratio < MINIMUM_SAFE_MASS_RATIO) {
try schedule_consume_entity(
ctx,
circle_entity.entity_id,
other_entity.value_ptr.entity_id,
);
}
}
} else {
try schedule_consume_entity(ctx, circle_entity.entity_id, other_entity.value_ptr.entity_id);
}
}
}
// span.end();
}
}
fn schedule_consume_entity(ctx: *spacetime.ReducerContext, consumer_id: u32, consumed_id: u32) !void {
_ = try ctx.db.get("consume_entity_timer").insert(ConsumeEntityTimer{
.scheduled_id = 0,
.scheduled_at = .{ .Time = ctx.timestamp },
.consumer_entity_id = consumer_id,
.consumed_entity_id = consumed_id,
});
}
pub fn consume_entity(ctx: *spacetime.ReducerContext, request: ConsumeEntityTimer) !void {
const consumed_entity_n = try ctx
.db.get("entity").col("entity_id")
.find(.{ .entity_id = request.consumed_entity_id});
const consumer_entity_n = try ctx
.db.get("entity").col("entity_id")
.find(.{ .entity_id = request.consumer_entity_id});
if(consumed_entity_n == null) {
return;
}
if(consumer_entity_n == null) {
return;
}
const consumed_entity = consumed_entity_n.?;
var consumer_entity = consumer_entity_n.?;
consumer_entity.mass += consumed_entity.mass;
try destroy_entity(ctx, consumed_entity.entity_id);
try ctx.db.get("entity").col("entity_id").update(consumer_entity);
}
pub fn destroy_entity(ctx: *spacetime.ReducerContext, entity_id: u32) !void {
try ctx.db.get("food").col("entity_id").delete(.{ .entity_id = entity_id});
try ctx.db.get("circle").col("entity_id").delete(.{ .entity_id = entity_id});
try ctx.db.get("entity").col("entity_id").delete(.{ .entity_id = entity_id});
}
pub fn player_split(ctx: *spacetime.ReducerContext) !void {
const player = (try ctx
.db.get("player").col("identity")
.find(.{ .identity = ctx.sender})).?;
var circles = std.ArrayList(Circle).init(ctx.allocator);
var circlesIter = try ctx
.db
.get("circle")
.col("player_id")
.filter(.{ .player_id = player.player_id});
defer circlesIter.close();
while(try circlesIter.next()) |circle| {
try circles.append(circle);
}
var circle_count = circles.items.len;
if(circle_count >= MAX_CIRCLES_PER_PLAYER) {
return;
}
for(circles.items) |c| {
var circle = c;
var circle_entity = (try ctx
.db
.get("entity")
.col("entity_id")
.find(.{ .entity_id = circle.entity_id})).?;
if(circle_entity.mass >= MIN_MASS_TO_SPLIT * 2) {
const half_mass = @divTrunc(circle_entity.mass, 2);
_ = try spawn_circle_at(
ctx,
circle.player_id,
half_mass,
circle_entity.position.add(circle.direction),
ctx.timestamp,
);
circle_entity.mass -= half_mass;
circle.last_split_time = ctx.timestamp;
try ctx.db.get("circle").col("entity_id").update(circle);
try ctx.db.get("entity").col("entity_id").update(circle_entity);
circle_count += 1;
if (circle_count >= MAX_CIRCLES_PER_PLAYER) {
break;
}
}
}
_ = try ctx.db
.get("circle_recombine_timer")
.insert(CircleRecombineTimer {
.scheduled_id = 0,
.scheduled_at = spacetime.ScheduleAt.durationSecs(ctx, SPLIT_RECOMBINE_DELAY_SEC),
.player_id = player.player_id,
});
std.log.warn("Player split!", .{});
}
pub fn spawn_food(ctx: *spacetime.ReducerContext, _: SpawnFoodTimer) !void {
if(try ctx.db.get("player").count() == 0) {
//Are there no players yet?
return;
}
const world_size = (try ctx
.db
.get("config")
.col("id")
.find(.{ .id = 0})).?
.world_size;
var rng = ctx.rng;
var food_count = try ctx.db.get("food").count();
while (food_count < TARGET_FOOD_COUNT) {
const food_mass = gen_range(&rng, FOOD_MASS_MIN, FOOD_MASS_MAX);
const food_radius = mass_to_radius(@intFromFloat(food_mass));
const x = gen_range(&rng, food_radius, @as(f32, @floatFromInt(world_size)) - food_radius);
const y = gen_range(&rng, food_radius, @as(f32, @floatFromInt(world_size)) - food_radius);
const entity = try ctx.db.get("entity").insert(Entity {
.entity_id = 0,
.position = DbVector2{ .x = x, .y = y },
.mass = @intFromFloat(food_mass),
});
_ = try ctx.db.get("food").insert(Food {
.entity_id = entity.entity_id,
});
food_count += 1;
std.log.info("Spawned food! {}", .{entity.entity_id});
}
}
pub fn circle_decay(ctx: *spacetime.ReducerContext, _: CircleDecayTimer) !void {
var circleIter = try ctx.db.get("circle").iter();
defer circleIter.close();
while(try circleIter.next()) |circle| {
var circle_entity = (try ctx
.db
.get("entity")
.col("entity_id")
.find(.{ .entity_id = circle.entity_id})).?;
if(circle_entity.mass <= START_PLAYER_MASS) {
continue;
}
circle_entity.mass = @intFromFloat((@as(f32, @floatFromInt(circle_entity.mass)) * 0.99));
try ctx.db.get("entity").col("entity_id").update(circle_entity);
}
}
pub fn calculate_center_of_mass(entities: []const Entity) DbVector2 {
const total_mass: u32 = blk: {
var sum: u32 = 0;
for(entities) |entity| {
sum += entity.mass;
}
break :blk sum;
};
const center_of_mass: DbVector2 = blk: {
var sum: DbVector2 = .{ .x = 0, .y = 0 };
for(entities) |entity| {
sum.x += entity.position.x * @as(f32, @floatFromInt(entity.mass));
sum.y += entity.position.y * @as(f32, @floatFromInt(entity.mass));
}
break :blk sum;
};
return center_of_mass.scale(1.0 / @as(f32, @floatFromInt(total_mass)));
}
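For reference, the result is just the mass-weighted average of the positions. The following is a hypothetical sanity-check test (not part of the module, shown only to illustrate the math) using the Entity and DbVector2 definitions above:
test "center of mass is mass-weighted" {
    const entities = [_]Entity{
        .{ .entity_id = 1, .position = .{ .x = 0, .y = 0 }, .mass = 1 },
        .{ .entity_id = 2, .position = .{ .x = 10, .y = 0 }, .mass = 3 },
    };
    const com = calculate_center_of_mass(&entities);
    // (0*1 + 10*3) / (1 + 3) = 7.5
    try std.testing.expectApproxEqAbs(@as(f32, 7.5), com.x, 0.001);
    try std.testing.expectApproxEqAbs(@as(f32, 0.0), com.y, 0.001);
}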
pub fn circle_recombine(ctx: *spacetime.ReducerContext, timer: CircleRecombineTimer) !void {
var circles = std.ArrayList(Circle).init(ctx.allocator);
var circlesIter = try ctx
.db
.get("circle")
.col("player_id")
.filter(.{ .player_id = timer.player_id });
defer circlesIter.close();
while(try circlesIter.next()) |circle| {
try circles.append(circle);
}
var recombining_entities = std.ArrayList(Entity).init(ctx.allocator);
for(circles.items) |circle| {
// Compare elapsed time in seconds; the raw timestamps are in microseconds.
if(circle.last_split_time.DurationSince(ctx.timestamp).as_f32(.Seconds) >= SPLIT_RECOMBINE_DELAY_SEC) {
const entity = (try ctx.db
.get("entity").col("entity_id")
.find(.{ .entity_id = circle.entity_id })).?;
try recombining_entities.append(entity);
}
}
if(recombining_entities.items.len <= 1) {
return; //No circles to recombine
}
const base_entity_id = recombining_entities.items[0].entity_id;
for(1..recombining_entities.items.len) |i| {
try schedule_consume_entity(ctx, base_entity_id, recombining_entities.items[i].entity_id);
}
}

View file

@ -1,3 +1,5 @@
// Copyright 2025 Tyler Peterson, Licensed under MPL-2.0
const std = @import("std");
const utils = @import("spacetime/utils.zig");
@ -64,7 +66,13 @@ pub fn logFn(comptime level: std.log.Level, comptime _: @TypeOf(.enum_literal),
pub const BytesSink = extern struct { inner: u32 };
pub const BytesSource = extern struct { inner: u32 };
pub const TableId = extern struct { _inner: u32, };
pub const RowIter = extern struct { _inner: u32, pub const INVALID = RowIter{ ._inner = 0}; }; pub const RowIter = extern struct {
_inner: u32,
pub const INVALID = RowIter{ ._inner = 0};
pub fn invalid(self: @This()) bool {
return self._inner == 0;
}
};
pub const IndexId = extern struct{ _inner: u32 };
pub const ColId = extern struct { _inner: u16 };
@ -73,16 +81,64 @@ pub const Identity = struct {
};
pub const Timestamp = struct {
__timestamp_micros_since_unix_epoch__: i64,
pub fn DurationSince(self: @This(), other: @This()) TimeDuration {
return .{
.__time_duration_micros__ = other.__timestamp_micros_since_unix_epoch__ - self.__timestamp_micros_since_unix_epoch__,
};
}
};
pub const TimeUnit = enum {
Minutes,
Seconds,
Milliseconds,
Microseconds,
};
pub const TimeDuration = struct {
__time_duration_micros__: i64,
pub fn as_f32(self: @This(), unit: TimeUnit) f32 {
const micros: f32 = @floatFromInt(self.__time_duration_micros__);
return switch(unit) {
.Minutes => micros / std.time.us_per_min,
.Seconds => micros / std.time.us_per_s,
.Milliseconds => micros / std.time.us_per_ms,
.Microseconds => micros,
};
}
pub fn create(time: f32, unit: TimeUnit) TimeDuration {
// The stored duration is an i64 of microseconds, so convert the f32 explicitly.
return switch(unit) {
.Minutes => .{ .__time_duration_micros__ = @intFromFloat(time * std.time.us_per_min) },
.Seconds => .{ .__time_duration_micros__ = @intFromFloat(time * std.time.us_per_s) },
.Milliseconds => .{ .__time_duration_micros__ = @intFromFloat(time * std.time.us_per_ms) },
.Microseconds => .{ .__time_duration_micros__ = @intFromFloat(time) },
};
}
};
pub const ScheduleAt = union(enum){
Interval: TimeDuration,
Time: Timestamp,
pub fn durationSecs(ctx: *ReducerContext, secs: f32) ScheduleAt {
return .{
.Time = .{
.__timestamp_micros_since_unix_epoch__ =
ctx.timestamp.__timestamp_micros_since_unix_epoch__ +
@as(i64, @intFromFloat(secs * std.time.us_per_s)),
}
};
}
pub fn interval(time: f32, unit: TimeUnit) ScheduleAt {
return .{
.Interval = TimeDuration.create(time, unit),
};
}
};
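The two constructors cover the one-shot and repeating cases. A usage sketch, reusing the CircleRecombineTimer row from the example game module above plus a hypothetical SpawnFoodTimer schema and "spawn_food_timer" table (neither is spelled out in this diff):
// One-shot: fire SPLIT_RECOMBINE_DELAY_SEC seconds after the current reducer call.
_ = try ctx.db.get("circle_recombine_timer").insert(CircleRecombineTimer{
    .scheduled_id = 0,
    .scheduled_at = spacetime.ScheduleAt.durationSecs(ctx, SPLIT_RECOMBINE_DELAY_SEC),
    .player_id = player.player_id,
});
// Repeating: run the scheduled reducer every 500 milliseconds.
_ = try ctx.db.get("spawn_food_timer").insert(SpawnFoodTimer{
    .scheduled_id = 0,
    .scheduled_at = spacetime.ScheduleAt.interval(500.0, .Milliseconds),
});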
pub const ConnectionId = struct {
@ -95,6 +151,7 @@ pub const SpacetimeValue = enum(u1) {
};
pub const SpacetimeError = error {
UNKNOWN,
HOST_CALL_FAILURE,
NOT_IN_TRANSACTION,
BSATN_DECODE_ERROR,
@ -124,6 +181,9 @@ pub extern "spacetime_10.0" fn datastore_index_scan_range_bsatn( index_id: Index
pub extern "spacetime_10.0" fn row_iter_bsatn_close(iter: RowIter) u16; pub extern "spacetime_10.0" fn row_iter_bsatn_close(iter: RowIter) u16;
pub extern "spacetime_10.0" fn datastore_delete_by_index_scan_range_bsatn(index_id: IndexId, prefix_ptr: [*c]const u8, prefix_len: usize, prefix_elems: ColId, rstart_ptr: [*c]const u8, rstart_len: usize, rend_ptr: [*c]const u8, rend_len: usize, out: [*c]u32) u16; pub extern "spacetime_10.0" fn datastore_delete_by_index_scan_range_bsatn(index_id: IndexId, prefix_ptr: [*c]const u8, prefix_len: usize, prefix_elems: ColId, rstart_ptr: [*c]const u8, rstart_len: usize, rend_ptr: [*c]const u8, rend_len: usize, out: [*c]u32) u16;
pub extern "spacetime_10.0" fn datastore_update_bsatn(table_id: TableId, index_id: IndexId, row_ptr: [*c]u8, row_len_ptr: [*c]usize) u16;
pub extern "spacetime_10.0" fn datastore_table_row_count(table_id: TableId, out: [*c]u64) u16;
pub fn retMap(errVal: i17) !SpacetimeValue {
return switch(errVal) {
@ -217,7 +277,7 @@ pub fn readArg(allocator: std.mem.Allocator, args: BytesSource, comptime t: type
const string_buf = try allocator.alloc(u8, len);
return try read_bytes_source(args, string_buf);
},
i8, u8, i16, u16, i32, u32, bool, i8, u8, i16, u16, i32, u32,
i64, u64, i128, u128, i256, u256,
f32, f64 => {
const read_type = t;
@ -239,15 +299,14 @@ pub fn readArg(allocator: std.mem.Allocator, args: BytesSource, comptime t: type
const tagType = std.meta.Tag(t);
const intType = u8;
const tag: tagType = @enumFromInt(try readArg(allocator, args, intType));
var temp: t = undefined;//@unionInit(t, @tagName(tag), undefined);
switch(tag) {
inline else => |tag_field| {
var temp: t = @unionInit(t, @tagName(tag_field), undefined);
const field = std.meta.fields(t)[@intFromEnum(tag_field)];
@field(temp, field.name) = (try readArg(allocator, args, field.type));
}
}
//@field(temp, field.name) = try readArg(allocator, args, @TypeOf(field));
return temp;
}
}
},
else => {
@compileLog(t);
@ -262,6 +321,7 @@ pub fn zigTypeToSpacetimeType(comptime param: ?type) AlgebraicType {
if(param == null) @compileError("Null parameter type passed to zigParamsToSpacetimeParams");
return switch(param.?) {
[]const u8 => .{ .String = {} },
bool => .{ .Bool = {}, },
i32 => .{ .I32 = {}, },
i64 => .{ .I64 = {}, },
i128 => .{ .I128 = {}, },
@ -319,8 +379,8 @@ const StructImpl = struct {
fields: []const StructFieldImpl,
};
pub fn addStructImpl(structImpls: *[]const StructImpl, layout: anytype, name_override: ?[]const u8) u32 { pub fn addStructImpl(comptime structImpls: *[]const StructImpl, layout: anytype) u32 {
const name = name_override orelse blk: { const name = blk: {
var temp: []const u8 = @typeName(layout);
if(std.mem.lastIndexOf(u8, temp, ".")) |idx|
temp = temp[idx+1..];
@ -329,6 +389,7 @@ pub fn addStructImpl(structImpls: *[]const StructImpl, layout: anytype, name_ove
//FIXME: Search for existing structImpl of provided layout. I think the current might work, but I don't trust it.
inline for(structImpls.*, 0..) |structImpl, i| {
@setEvalBranchQuota(structImpl.name.len * 100);
if(std.mem.eql(u8, structImpl.name, name)) {
return i;
} }
@ -343,7 +404,7 @@ pub fn addStructImpl(structImpls: *[]const StructImpl, layout: anytype, name_ove
.name = field.name,
.type = .{
.Ref = .{
.inner = addStructImpl(structImpls, field.type, null), .inner = addStructImpl(structImpls, field.type),
} }
} }
} }
@ -398,6 +459,7 @@ pub fn getStructImplOrType(structImpls: []const StructImpl, layout: type) Algebr
break :blk temp;
};
@setEvalBranchQuota(structImpls.len * 100);
inline for(structImpls, 0..) |structImpl, i| {
if(std.mem.eql(u8, structImpl.name, name)) {
return .{
@ -411,179 +473,7 @@ pub fn getStructImplOrType(structImpls: []const StructImpl, layout: type) Algebr
return zigTypeToSpacetimeType(layout);
}
pub fn compile(comptime moduleTables : []const Table, comptime moduleReducers : []const Reducer) !RawModuleDefV9 { pub fn callReducer(comptime mdef: []const SpecReducer, comptime id: usize, args: anytype) ReducerError!void {
var def : RawModuleDefV9 = undefined;
_ = &def;
var tableDefs: []const RawTableDefV9 = &[_]RawTableDefV9{};
var reducerDefs: []const RawReducerDefV9 = &[_]RawReducerDefV9{};
var raw_types: []const AlgebraicType = &[_]AlgebraicType{};
var types: []const RawTypeDefV9 = &[_]RawTypeDefV9{};
var structDecls: []const StructImpl = &[_]StructImpl{};
inline for(moduleTables) |table| {
const table_name: []const u8 = table.name.?;
const table_type: TableType = table.type;
const table_access: TableAccess = table.access;
const product_type_ref: AlgebraicTypeRef = AlgebraicTypeRef{
.inner = addStructImpl(&structDecls, table.schema, table.schema_name),
};
const primary_key: []const u16 = blk: {
if(table.primary_key) |key| {
break :blk &[_]u16{ std.meta.fieldIndex(table.schema, key).?, };
}
break :blk &[_]u16{};
};
var indexes: []const RawIndexDefV9 = &[_]RawIndexDefV9{};
if(table.primary_key) |key| {
indexes = indexes ++ &[_]RawIndexDefV9{
RawIndexDefV9{
.name = null,
.accessor_name = key,
.algorithm = .{
.BTree = &.{ 0 }
}
}
};
}
if(table.indexes) |_indexes| {
inline for(_indexes) |index| {
const fieldIndex = std.meta.fieldIndex(table.schema, index.name).?;
const indexAlgo: RawIndexAlgorithm = blk: {
switch(index.layout) {
.BTree => break :blk .{ .BTree = &.{ fieldIndex } },
.Hash => break :blk .{ .Hash = &.{ fieldIndex } },
.Direct => break :blk .{ .Direct = fieldIndex },
}
};
indexes = indexes ++ &[_]RawIndexDefV9{
RawIndexDefV9{
.name = null,
.accessor_name = index.name,
.algorithm = indexAlgo
}
};
}
}
var constraints: []const RawConstraintDefV9 = &[_]RawConstraintDefV9{};
if(table.primary_key) |_| {
constraints = constraints ++ &[_]RawConstraintDefV9{
RawConstraintDefV9{
.name = null,
.data = .{ .unique = .{ .Columns = &.{ primary_key[0] } } },
}
};
}
const schedule: ?RawScheduleDefV9 = schedule_blk: {
if(table.schedule_reducer == null) break :schedule_blk null;
const column = column_blk: for(std.meta.fields(table.schema), 0..) |field, i| {
if(field.type == ScheduleAt) break :column_blk i;
};
const resolvedReducer = blk: for(moduleReducers) |reducer| {
if(reducer.func == table.schedule_reducer.?.func)
break :blk reducer;
};
break :schedule_blk RawScheduleDefV9{
.name = table_name ++ "_sched",
.reducer_name = resolvedReducer.name.?,
.scheduled_at_column = column,
};
};
tableDefs = tableDefs ++ &[_]RawTableDefV9{
.{
.name = table_name,
.product_type_ref = product_type_ref,
.primary_key = primary_key,
.indexes = indexes,
.constraints = constraints,
.sequences = &[_]RawSequenceDefV9{},
.schedule = schedule,
.table_type = table_type,
.table_access = table_access,
}
};
}
inline for(structDecls) |structDecl| {
var product_elements: []const ProductTypeElement = &[_]ProductTypeElement{};
inline for(structDecl.fields) |field|
{
product_elements = product_elements ++ &[_]ProductTypeElement{
.{
.name = field.name,
.algebraic_type = field.type,
}
};
}
raw_types = raw_types ++ &[_]AlgebraicType{
.{
.Product = .{
.elements = product_elements,
}
},
};
types = types ++ &[_]RawTypeDefV9{
.{
.name = .{
.scope = &[_][]u8{},
.name = structDecl.name
},
.ty = .{ .inner = raw_types.len-1, },
.custom_ordering = true,
}
};
}
inline for(moduleReducers) |reducer| {
const name: []const u8 = reducer.name.?;
const lifecycle: Lifecycle = reducer.lifecycle;
var params: []const ProductTypeElement = &[_]ProductTypeElement{};
const param_names = reducer.params;
for(@typeInfo(reducer.func_type).@"fn".params[1..], param_names) |param, param_name| {
params = params ++ &[_]ProductTypeElement{
.{
.name = param_name,
.algebraic_type = getStructImplOrType(structDecls, param.type.?),
}
};
}
reducerDefs = reducerDefs ++ &[_]RawReducerDefV9{
.{
.name = name,
.params = .{ .elements = params },
.lifecycle = lifecycle,
},
};
}
return .{
.typespace = .{
.types = raw_types,
},
.tables = tableDefs,
.reducers = reducerDefs,
.types = types,
.misc_exports = &[_]RawMiscModuleExportV9{},
.row_level_security = &[_]RawRowLevelSecurityDefV9{},
};
}
pub fn callReducer(comptime mdef: []const Reducer, id: usize, args: anytype) ReducerError!void {
inline for(mdef, 0..) |field, i| {
if(id == i) {
const func = field.func_type;
@ -592,7 +482,7 @@ pub fn callReducer(comptime mdef: []const Reducer, id: usize, args: anytype) Red
return @call(.auto, func_val, args);
} }
const name: []const u8 = field.name.?; const name: []const u8 = field.name;
std.log.err("invalid number of args passed to {s}, expected {} got {}", .{name, @typeInfo(func).@"fn".params.len, std.meta.fields(@TypeOf(args)).len}); std.log.err("invalid number of args passed to {s}, expected {} got {}", .{name, @typeInfo(func).@"fn".params.len, std.meta.fields(@TypeOf(args)).len});
@panic("invalid number of args passed to func"); @panic("invalid number of args passed to func");
} }
@ -650,6 +540,19 @@ pub fn PrintModule(data: anytype) void {
PrintModule(data.scope);
PrintModule(data.name);
},
[]const RawReducerDefV9 => {
for(data) |elem| {
PrintModule(elem);
}
},
RawReducerDefV9 => {
PrintModule(data.lifecycle);
PrintModule(data.name);
PrintModule(data.params);
},
Lifecycle => {
std.log.debug("\"{any}\"", .{data});
},
[][]const u8 => {
for(data) |elem| {
PrintModule(elem);
@ -672,8 +575,8 @@ pub const Param = struct {
name: []const u8,
};
pub const Reducer = struct { pub const SpecReducer = struct {
name: ?[]const u8 = null, name: []const u8,
lifecycle: Lifecycle = .None,
params: []const [:0]const u8 = &.{},
param_types: ?[]type = null,
@ -681,67 +584,284 @@ pub const Reducer = struct {
func: *const fn()void,
};
pub fn Reducer(data: anytype) SpecReducer {
return .{
.name = data.name,
.lifecycle = if(@hasField(@TypeOf(data), "lifecycle")) data.lifecycle else .None,
.params = if(@hasField(@TypeOf(data), "params")) data.params else &.{},
.func = @ptrCast(data.func),
.func_type = @TypeOf(data.func.*)
};
}
pub const Index = struct {
name: []const u8,
layout: std.meta.Tag(RawIndexAlgorithm),
};
pub const Table = struct { pub const TableAttribs = struct {
name: ?[]const u8 = null,
schema: type,
schema_name: []const u8,
type: TableType = .User, type: TableType = .User,
access: TableAccess = .Private, access: TableAccess = .Private,
primary_key: ?[]const u8 = null, primary_key: ?[]const u8 = null,
schedule_reducer: ?*const Reducer = null, schedule: ?[]const u8 = null,
indexes: ?[]const Index = null, indexes: ?[]const Index = null,
unique: ?[]const []const u8 = null, unique: ?[]const []const u8 = null,
autoinc: ?[]const []const u8 = null, autoinc: ?[]const [:0]const u8 = null,
}; };
pub const reducers: []const Reducer = blk: { pub const Table = struct {
var temp: []const Reducer = &.{}; name: []const u8,
const root = @import("root"); schema: type,
for(@typeInfo(root).@"struct".decls) |decl| { attribs: TableAttribs = .{},
const field = @field(root, decl.name); };
if(@TypeOf(@field(root, decl.name)) == Reducer) {
temp = temp ++ &[_]Reducer{ pub const Spec = struct {
Reducer{ tables: []const Table,
.name = field.name orelse decl.name, reducers: []const SpecReducer,
.lifecycle = field.lifecycle, row_level_security: []const []const u8,
.params = field.params, includes: []const Spec = &.{},
.func = field.func,
.func_type = field.func_type, pub fn getAllTable(self: @This()) []const Table {
var tables: []const Table = self.tables;
for(self.includes) |include| {
tables = tables ++ include.getAllTable();
}
return tables;
}
pub fn getAllReducers(self: @This()) []const SpecReducer {
var reducers: []const SpecReducer = self.reducers;
for(self.includes) |include| {
reducers = reducers ++ include.getAllReducers();
}
return reducers;
}
pub fn getAllRLS(self: @This()) []const []const u8 {
var row_level_security: []const []const u8 = self.row_level_security;
for(self.includes) |include| {
row_level_security = row_level_security ++ include.getAllRLS();
}
return row_level_security;
}
};
pub fn SpecBuilder(comptime spec: Spec) RawModuleDefV9 {
comptime {
//var moduleDef: RawModuleDefV9 = undefined;
var tableDefs: []const RawTableDefV9 = &[_]RawTableDefV9{};
var reducerDefs: []const RawReducerDefV9 = &[_]RawReducerDefV9{};
var raw_types: []const AlgebraicType = &[_]AlgebraicType{};
var types: []const RawTypeDefV9 = &[_]RawTypeDefV9{};
var row_level_security: []const RawRowLevelSecurityDefV9 = &[_]RawRowLevelSecurityDefV9{};
var structDecls: []const StructImpl = &[_]StructImpl{};
for(spec.getAllTable()) |table| {
const table_name: []const u8 = table.name;
const table_type: TableType = table.attribs.type;
const table_access: TableAccess = table.attribs.access;
const product_type_ref: AlgebraicTypeRef = AlgebraicTypeRef{
.inner = addStructImpl(&structDecls, table.schema),
};
const primary_key: []const u16 = blk: {
if(table.attribs.primary_key) |key| {
const fieldIdx = std.meta.fieldIndex(table.schema, key);
if(fieldIdx == null) {
@compileLog(table.schema, key);
@compileError("Primary Key `" ++ table_name ++ "." ++ key ++ "` does not exist in table schema `"++@typeName(table.schema)++"`!");
}
break :blk &[_]u16{ fieldIdx.?, };
}
break :blk &[_]u16{};
};
var indexes: []const RawIndexDefV9 = &[_]RawIndexDefV9{};
if(table.attribs.primary_key) |key| {
indexes = indexes ++ &[_]RawIndexDefV9{
RawIndexDefV9{
.name = null,
.accessor_name = key,
.algorithm = .{
.BTree = &.{ 0 }
}
}
};
}
if(table.attribs.indexes) |_indexes| {
for(_indexes) |index| {
const fieldIndex = std.meta.fieldIndex(table.schema, index.name).?;
const indexAlgo: RawIndexAlgorithm = blk: {
switch(index.layout) {
.BTree => break :blk .{ .BTree = &.{ fieldIndex } },
.Hash => break :blk .{ .Hash = &.{ fieldIndex } },
.Direct => break :blk .{ .Direct = fieldIndex },
}
};
indexes = indexes ++ &[_]RawIndexDefV9{
RawIndexDefV9{
.name = null,
.accessor_name = index.name,
.algorithm = indexAlgo
} }
}; };
} }
} }
break :blk temp;
};
pub const tables: []const Table = blk: { var constraints: []const RawConstraintDefV9 = &[_]RawConstraintDefV9{};
var temp: []const Table = &.{}; if(table.attribs.primary_key) |_| {
const root = @import("root"); constraints = constraints ++ &[_]RawConstraintDefV9{
for(@typeInfo(root).@"struct".decls) |decl| { RawConstraintDefV9{
const field = @field(root, decl.name); .name = null,
if(@TypeOf(@field(root, decl.name)) == Table) { .data = .{ .unique = .{ .Columns = &.{ primary_key[0] } } },
temp = temp ++ &[_]Table{ }
Table{ };
.type = field.type, }
.access = field.access,
.schema = field.schema, const schedule: ?RawScheduleDefV9 = schedule_blk: {
.schema_name = field.schema_name, if(table.attribs.schedule == null) break :schedule_blk null;
.name = field.name orelse decl.name, const column = column_blk: for(std.meta.fields(table.schema), 0..) |field, i| {
.primary_key = field.primary_key, if(field.type == ScheduleAt) break :column_blk i;
.schedule_reducer = field.schedule_reducer, };
.indexes = field.indexes, const resolvedReducer = blk: {
.autoinc = field.autoinc, for(spec.reducers) |reducer| {
.unique = field.unique, if(std.mem.eql(u8, reducer.name, table.attribs.schedule.?))
break :blk reducer;
}
@compileError("Reducer of name `"++table.attribs.schedule.?++"` does not exist!");
};
break :schedule_blk RawScheduleDefV9{
.name = table_name ++ "_sched",
.reducer_name = resolvedReducer.name,
.scheduled_at_column = column,
};
};
var sequences: []const RawSequenceDefV9 = &[_]RawSequenceDefV9{};
if(table.attribs.autoinc) |autoincs| {
for(autoincs) |autoinc| {
sequences = sequences ++ &[_]RawSequenceDefV9{
RawSequenceDefV9{
.name = table_name ++ "_" ++ autoinc ++ "_seq",
.column = std.meta.fieldIndex(table.schema, autoinc).?,
.start = null,
.min_value = null,
.max_value = null,
.increment = 1,
} }
}; };
} }
} }
break :blk temp;
tableDefs = tableDefs ++ &[1]RawTableDefV9{
.{
.name = table_name,
.product_type_ref = product_type_ref,
.primary_key = primary_key,
.indexes = indexes,
.constraints = constraints,
.sequences = sequences,
.schedule = schedule,
.table_type = table_type,
.table_access = table_access,
}
};
}
@setEvalBranchQuota(structDecls.len * 100);
for(structDecls) |structDecl| {
var product_elements: []const ProductTypeElement = &[_]ProductTypeElement{};
for(structDecl.fields) |field|
{
product_elements = product_elements ++ &[_]ProductTypeElement{
.{
.name = field.name,
.algebraic_type = field.type,
}
};
}
raw_types = raw_types ++ &[_]AlgebraicType{
.{
.Product = .{
.elements = product_elements,
}
},
};
types = types ++ &[_]RawTypeDefV9{
.{
.name = .{
.scope = &[_][]u8{},
.name = structDecl.name
},
.ty = .{ .inner = raw_types.len-1, },
.custom_ordering = true,
}
};
}
for(spec.getAllReducers()) |reducer| {
const name: []const u8 = reducer.name;
const lifecycle: Lifecycle = reducer.lifecycle;
var params: []const ProductTypeElement = &[_]ProductTypeElement{};
const param_names = reducer.params;
for(@typeInfo(reducer.func_type).@"fn".params[1..], param_names) |param, param_name| {
params = params ++ &[_]ProductTypeElement{
.{
.name = param_name,
.algebraic_type = getStructImplOrType(structDecls, param.type.?),
}
};
}
reducerDefs = reducerDefs ++ &[_]RawReducerDefV9{
.{
.name = name,
.params = .{ .elements = params },
.lifecycle = lifecycle,
},
};
}
for(spec.row_level_security) |rls| {
row_level_security = row_level_security ++ &[_]RawRowLevelSecurityDefV9{
RawRowLevelSecurityDefV9{
.sql = rls,
}
};
}
return .{
.typespace = .{
.types = raw_types,
},
.tables = tableDefs,
.reducers = reducerDefs,
.types = types,
.misc_exports = &[_]RawMiscModuleExportV9{},
.row_level_security = row_level_security,
};
}
}
pub const globalSpec: Spec = blk: {
const root = @import("root");
for(@typeInfo(root).@"struct".decls) |decl| {
const field = @field(root, decl.name);
if(@TypeOf(field) == Spec) {
break :blk field;
}
}
@compileError("No spacetime spec found in root file!");
};
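globalSpec is resolved at comptime by scanning the root module's public declarations for a value of type Spec. A minimal, hypothetical root module might look like the sketch below; the table name, schema, and reducer are illustrative only and not taken from this repository:
const spacetime = @import("spacetime.zig");

const Config = struct { id: u32, world_size: u64 };

fn init_module(ctx: *spacetime.ReducerContext) !void {
    _ = try ctx.db.get("config").insert(Config{ .id = 0, .world_size = 1000 });
}

pub const spec = spacetime.Spec{
    .tables = &.{
        .{ .name = "config", .schema = Config, .attribs = .{ .primary_key = "id" } },
    },
    .reducers = &.{
        spacetime.Reducer(.{ .name = "init_module", .func = &init_module }),
    },
    .row_level_security = &.{},
};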
pub export fn __describe_module__(description: BytesSink) void {
@ -751,15 +871,9 @@ pub export fn __describe_module__(description: BytesSink) void {
var moduleDefBytes = std.ArrayList(u8).init(allocator);
defer moduleDefBytes.deinit();
const compiledModule = comptime compile(tables, reducers) catch |err| { const compiledModule = comptime SpecBuilder(globalSpec);
var buf: [1024]u8 = undefined;
const fmterr = std.fmt.bufPrint(&buf, "Error: {}", .{err}) catch {
@compileError("ERROR2: No Space Left! Expand error buffer size!");
};
@compileError(fmterr);
};
PrintModule(compiledModule); //PrintModule(compiledModule);
serialize_module(&moduleDefBytes, compiledModule) catch {
std.log.err("Allocator Error: Cannot continue!", .{});
@ -783,17 +897,34 @@ pub export fn __call_reducer__(
) i16 { ) i16 {
_ = err; _ = err;
const allocator = std.heap.wasm_allocator; const backend_allocator = std.heap.wasm_allocator;
var arena_allocator = std.heap.ArenaAllocator.init(backend_allocator);
defer arena_allocator.deinit();
const allocator = arena_allocator.allocator();
var ctx: ReducerContext = .{
.allocator = allocator,
.sender = std.mem.bytesAsValue(Identity, std.mem.sliceAsBytes(&[_]u64{ sender_0, sender_1, sender_2, sender_3})).*,
.timestamp = Timestamp{ .__timestamp_micros_since_unix_epoch__ = @intCast(timestamp), },
.connection_id = std.mem.bytesAsValue(ConnectionId, std.mem.sliceAsBytes(&[_]u64{ conn_id_0, conn_id_1})).*,
.db = .{
.allocator = allocator, .allocator = backend_allocator,
.frame_allocator = allocator,
}, },
}; };
const spec: Spec = blk: {
const root = @import("root");
inline for(@typeInfo(root).@"struct".decls) |decl| {
const field = @field(root, decl.name);
if(@TypeOf(field) == Spec) {
break :blk field;
}
}
};
const reducers = spec.reducers;
inline for(reducers, 0..) |reducer, i| {
if(id == i) {
const func = reducer.func_type;
@ -803,7 +934,7 @@ pub export fn __call_reducer__(
comptime var argList: []const std.builtin.Type.StructField = &[_]std.builtin.Type.StructField{
std.builtin.Type.StructField{
.alignment = @alignOf(*ReducerContext),
.default_value = null, .default_value_ptr = null,
.is_comptime = false, .is_comptime = false,
.name = "0", .name = "0",
.type = *ReducerContext, .type = *ReducerContext,
@ -815,7 +946,7 @@ pub export fn __call_reducer__(
argList = argList ++ &[_]std.builtin.Type.StructField{ argList = argList ++ &[_]std.builtin.Type.StructField{
std.builtin.Type.StructField{ std.builtin.Type.StructField{
.alignment = @alignOf(param.type.?), .alignment = @alignOf(param.type.?),
.default_value = null, .default_value_ptr = null,
.is_comptime = false, .is_comptime = false,
.name = comptime utils.itoa(argCount), .name = comptime utils.itoa(argCount),
.type = param.type.?, .type = param.type.?,
@ -849,7 +980,7 @@ pub export fn __call_reducer__(
} }
callReducer(reducers, i, constructedArg) catch |errRet| { callReducer(reducers, i, constructedArg) catch |errRet| {
std.log.debug("{s}", .{@errorName(errRet)}); std.log.err("{s}", .{@errorName(errRet)});
if (@errorReturnTrace()) |trace| { if (@errorReturnTrace()) |trace| {
std.debug.dumpStackTrace(trace.*); std.debug.dumpStackTrace(trace.*);
} }

View file

@ -1,3 +1,5 @@
// Copyright 2025 Tyler Peterson, Licensed under MPL-2.0
const spacetime = @import("../spacetime.zig"); const spacetime = @import("../spacetime.zig");
const BytesSink = spacetime.BytesSink; const BytesSink = spacetime.BytesSink;

112
src/spacetime/math.zig Normal file
View file

@ -0,0 +1,112 @@
pub const DbVector3 = struct {
x: f32,
y: f32,
z: f32,
pub fn sqr_magnitude(self: @This()) f32 {
return self.x * self.x + self.y * self.y + self.z * self.z;
}
pub fn magnitude(self: @This()) f32 {
return @sqrt(self.sqr_magnitude());
}
pub fn normalized(self: @This()) DbVector3 {
const length = self.magnitude();
return .{
.x = self.x / length,
.y = self.y / length,
.z = self.z / length,
};
}
pub fn scale(self: @This(), val: f32) DbVector3 {
return .{
.x = self.x * val,
.y = self.y * val,
.z = self.z * val,
};
}
pub fn add(self: @This(), other: DbVector3) DbVector3 {
return .{
.x = self.x + other.x,
.y = self.y + other.y,
.z = self.z + other.z,
};
}
pub fn add_to(self: *@This(), other: DbVector3) void {
self.x += other.x;
self.y += other.y;
self.z += other.z;
}
pub fn sub(self: @This(), other: DbVector3) DbVector3 {
return .{
.x = self.x - other.x,
.y = self.y - other.y,
.z = self.z - other.z,
};
}
pub fn sub_from(self: *@This(), other: DbVector3) void {
self.x -= other.x;
self.y -= other.y;
self.z -= other.z;
}
};
pub const DbVector2 = struct {
x: f32,
y: f32,
pub fn sqr_magnitude(self: @This()) f32 {
return self.x * self.x + self.y * self.y;
}
pub fn magnitude(self: @This()) f32 {
return @sqrt(self.sqr_magnitude());
}
pub fn normalized(self: @This()) DbVector2 {
const length = self.magnitude();
return .{
.x = self.x / length,
.y = self.y / length,
};
}
pub fn scale(self: @This(), val: f32) DbVector2 {
return .{
.x = self.x * val,
.y = self.y * val,
};
}
pub fn add(self: @This(), other: DbVector2) DbVector2 {
return .{
.x = self.x + other.x,
.y = self.y + other.y,
};
}
pub fn add_to(self: *@This(), other: DbVector2) void {
self.x += other.x;
self.y += other.y;
}
pub fn sub(self: @This(), other: DbVector2) DbVector2 {
return .{
.x = self.x - other.x,
.y = self.y - other.y,
};
}
pub fn sub_from(self: *@This(), other: DbVector2) void {
self.x -= other.x;
self.y -= other.y;
}
};
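A small, self-contained sketch of how the 2D helpers compose (the values are arbitrary and only illustrate the arithmetic):
const dir = DbVector2{ .x = 3.0, .y = 4.0 };
const step = dir.normalized().scale(2.0); // unit direction scaled to length 2 => { 1.2, 1.6 }
var pos = DbVector2{ .x = 10.0, .y = 10.0 };
pos.add_to(step); // pos is now { 11.2, 11.6 }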

View file

@ -1,3 +1,5 @@
// Copyright 2025 Tyler Peterson, Licensed under MPL-2.0
const std = @import("std");
pub const types = @import("types.zig");
@ -109,9 +111,8 @@ fn serialize_raw_misc_module_export_v9(array: *std.ArrayList(u8), val: RawMiscMo
} }
fn serialize_raw_row_level_security_def_v9(array: *std.ArrayList(u8), val: RawRowLevelSecurityDefV9) !void {
_ = array; try array.appendSlice(&std.mem.toBytes(@as(u32, @intCast(val.sql.len))));
_ = val; try array.appendSlice(val.sql);
unreachable;
} }
fn serialize_raw_index_algorithm(array: *std.ArrayList(u8), val: RawIndexAlgorithm) !void {
@ -153,9 +154,18 @@ fn serialize_raw_constraint_def_v9(array: *std.ArrayList(u8), val: RawConstraint
} }
fn serialize_raw_sequence_def_v9(array: *std.ArrayList(u8), val: RawSequenceDefV9) !void {
_ = array; try array.appendSlice(&[_]u8{ @intFromBool(val.name == null) });
_ = val; if(val.name) |name| {
unreachable; try array.appendSlice(&std.mem.toBytes(@as(u32, @intCast(name.len))));
try array.appendSlice(name);
}
try array.appendSlice(&std.mem.toBytes(@as(u16, @intCast(val.column))));
try array.appendSlice(&[_]u8{ @intFromBool(val.start == null) });
try array.appendSlice(&[_]u8{ @intFromBool(val.min_value == null) });
if(val.min_value != null) unreachable; // serializing a non-null min_value is not implemented
try array.appendSlice(&[_]u8{ @intFromBool(val.max_value == null) });
if(val.max_value != null) unreachable; // serializing a non-null max_value is not implemented
try array.appendSlice(&std.mem.toBytes(@as(i128, @intCast(val.increment))));
} }
fn serialize_raw_schedule_def_v9(array: *std.ArrayList(u8), val: RawScheduleDefV9) !void {

View file

@ -1,3 +1,5 @@
// Copyright 2025 Tyler Peterson, Licensed under MPL-2.0
const std = @import("std");
const utils = @import("utils.zig");
const spacetime = @import("../spacetime.zig");
@ -285,10 +287,10 @@ pub fn UnionDeserializer(union_type: type) fn(allocator: std.mem.Allocator, *[]c
}.deserialize; }.deserialize;
} }
pub fn StructDeserializer(struct_type: type) fn(allocator: std.mem.Allocator, *[]const u8) std.mem.Allocator.Error!*struct_type { pub fn StructDeserializer(struct_type: type) fn(allocator: std.mem.Allocator, *[]u8) std.mem.Allocator.Error!struct_type {
return struct {
pub fn deserialize(allocator: std.mem.Allocator, data: *[]const u8) std.mem.Allocator.Error!*struct_type { pub fn deserialize(allocator: std.mem.Allocator, data: *[]u8) std.mem.Allocator.Error!struct_type {
const ret = try allocator.create(struct_type); var ret: struct_type = undefined;
var offset_mem = data.*;
const fields = std.meta.fields(struct_type);
inline for(fields) |field| {
@ -296,22 +298,21 @@ pub fn StructDeserializer(struct_type: type) fn(allocator: std.mem.Allocator, *[
[]const u8 => {
const len = std.mem.bytesAsValue(u32, offset_mem[0..4]).*;
const str = try allocator.dupe(u8, offset_mem[4..(4+len)]);
@field(ret.*, field.name) = str; @field(ret, field.name) = str;
offset_mem = offset_mem[4+len ..]; offset_mem = offset_mem[4+len ..];
}, },
i8, u8, i16, u16, i32, u32,
i64, u64, i128, u128, i256, u256,
f32, f64 => {
std.log.debug("field_type: {} (offset_mem.len: {})", .{field.type, offset_mem.len}); @field(ret, field.name) = std.mem.bytesAsValue(field.type, offset_mem[0..@sizeOf(field.type)]).*;
@field(ret.*, field.name) = std.mem.bytesAsValue(field.type, offset_mem[0..@sizeOf(field.type)]).*;
offset_mem = offset_mem[@sizeOf(field.type)..]; offset_mem = offset_mem[@sizeOf(field.type)..];
}, },
else => blk: { else => blk: {
if(@typeInfo(field.type) == .@"struct") { if(@typeInfo(field.type) == .@"struct") {
@field(ret.*, field.name) = (try StructDeserializer(field.type)(allocator, &offset_mem)).*; @field(ret, field.name) = try StructDeserializer(field.type)(allocator, &offset_mem);
break :blk; break :blk;
} else if(@typeInfo(field.type) == .@"union") { } else if(@typeInfo(field.type) == .@"union") {
@field(ret.*, field.name) = (try UnionDeserializer(field.type)(allocator, &offset_mem)).*; @field(ret, field.name) = try UnionDeserializer(field.type)(allocator, &offset_mem);
break :blk; break :blk;
} }
@compileLog(field.type); @compileLog(field.type);
@ -320,7 +321,7 @@ pub fn StructDeserializer(struct_type: type) fn(allocator: std.mem.Allocator, *[
} }
} }
data.* = offset_mem; data.* = offset_mem;
std.log.debug("StructDeserializer Ended!", .{}); //std.log.debug("StructDeserializer Ended!", .{});
return ret; return ret;
} }
}.deserialize; }.deserialize;
@ -341,21 +342,47 @@ pub fn Iter(struct_type: type) type {
return struct { return struct {
allocator: std.mem.Allocator, allocator: std.mem.Allocator,
handle: spacetime.RowIter, handle: spacetime.RowIter,
buffer: [0x20_000]u8 = undefined, buffer: []u8,
contents: ?[]u8 = null, contents: []u8,
last_ret: SpacetimeValue = .OK, last_ret: SpacetimeValue = .OK,
inited: bool = false,
pub fn next(self: *@This()) spacetime.ReducerError!?*struct_type { pub fn init(allocator: std.mem.Allocator, rowIter: spacetime.RowIter) !@This() {
const buffer = try allocator.alloc(u8, 0x20_000);
return .{
.allocator = allocator,
.handle = rowIter,
.buffer = buffer,
.contents = buffer[0..0],
.inited = true,
};
}
pub fn next(self: *@This()) spacetime.ReducerError!?struct_type {
var buffer_len: usize = undefined; var buffer_len: usize = undefined;
var ret: spacetime.SpacetimeValue = self.last_ret; var ret: spacetime.SpacetimeValue = self.last_ret;
if(self.contents == null or self.contents.?.len == 0) { blk: while(true) {
if(self.contents.len == 0) {
if(self.handle._inner == spacetime.RowIter.INVALID._inner) { if(self.handle._inner == spacetime.RowIter.INVALID._inner) {
self.contents = null;
return null; return null;
} }
buffer_len = self.buffer.len; buffer_len = self.buffer.len;
ret = try spacetime.retMap(spacetime.row_iter_bsatn_advance(self.handle, @constCast(@ptrCast(&self.buffer)), &buffer_len)); ret = spacetime.retMap(spacetime.row_iter_bsatn_advance(self.handle, self.buffer.ptr, &buffer_len)) catch |err| {
switch(err) {
SpacetimeError.BUFFER_TOO_SMALL => {
self.buffer = try self.allocator.realloc(self.buffer, buffer_len);
continue :blk;
},
SpacetimeError.NO_SUCH_ITER => {
return SpacetimeError.NO_SUCH_ITER;
},
else => {
return SpacetimeError.UNKNOWN;
}
}
};
self.contents = self.buffer[0..buffer_len]; self.contents = self.buffer[0..buffer_len];
if(ret == .EXHAUSTED) { if(ret == .EXHAUSTED) {
@ -363,30 +390,34 @@ pub fn Iter(struct_type: type) type {
} }
self.last_ret = ret; self.last_ret = ret;
} }
if(self.contents == null or self.contents.?.len == 0) { if(self.contents.len == 0) {
return null; return null;
} }
return StructDeserializer(struct_type)(self.allocator, &(self.contents.?)); var offset = self.contents;
} const retValue = try StructDeserializer(struct_type)(self.allocator, &offset);
self.contents = offset;
pub fn one_or_null(self: *@This()) ?*struct_type { return retValue;
defer self.close(); }
return self.next() catch null;
} }
pub fn close(self: *@This()) void { pub fn close(self: *@This()) void {
if (self.handle.invalid())
{
_ = spacetime.row_iter_bsatn_close(self.handle); _ = spacetime.row_iter_bsatn_close(self.handle);
self.handle = spacetime.RowIter.INVALID; self.handle = spacetime.RowIter.INVALID;
self.contents = null; }
self.contents = undefined;
self.allocator.free(self.buffer);
} }
}; };
} }
pub fn Column2ORM(comptime table_name: []const u8, comptime column_name: [:0]const u8) type {
const table = blk: {
for(spacetime.tables) |table| { for(spacetime.globalSpec.tables) |table| {
if(std.mem.eql(u8, table_name, table.name.?)) { if(std.mem.eql(u8, table_name, table.name)) {
break :blk table; break :blk table;
} }
} }
@ -402,7 +433,7 @@ pub fn Column2ORM(comptime table_name: []const u8, comptime column_name: [:0]con
.fields = &.{ .fields = &.{
std.builtin.Type.StructField{ std.builtin.Type.StructField{
.alignment = @alignOf(column_type), .alignment = @alignOf(column_type),
.default_value = null, .default_value_ptr = null,
.is_comptime = false, .is_comptime = false,
.name = column_name, .name = column_name,
.type = column_type, .type = column_type,
@ -420,7 +451,8 @@ pub fn Column2ORM(comptime table_name: []const u8, comptime column_name: [:0]con
const temp_name: []const u8 = comptime table_name ++ "_" ++ column_name ++ "_idx_btree"; const temp_name: []const u8 = comptime table_name ++ "_" ++ column_name ++ "_idx_btree";
var id = spacetime.IndexId{ ._inner = std.math.maxInt(u32)}; var id = spacetime.IndexId{ ._inner = std.math.maxInt(u32)};
const err = try spacetime.retMap(spacetime.index_id_from_name(temp_name.ptr, temp_name.len, &id)); const err = try spacetime.retMap(spacetime.index_id_from_name(temp_name.ptr, temp_name.len, &id));
std.log.debug("index_id_from_name({}): {x}", .{err, id._inner}); _ = err;
//std.log.debug("index_id_from_name({}): {x}", .{err, id._inner});
const nVal: struct{ bounds: BoundVariant, val: wrapped_type } = .{ const nVal: struct{ bounds: BoundVariant, val: wrapped_type } = .{
.bounds = .Inclusive, .bounds = .Inclusive,
@ -428,9 +460,9 @@ pub fn Column2ORM(comptime table_name: []const u8, comptime column_name: [:0]con
}; };
const size: usize = getStructSize(nVal); const size: usize = getStructSize(nVal);
const mem = try self.allocator.alloc(u8, size); const mem = try self.allocator.alignedAlloc(u8, 1, size);
var offset_mem = mem;
defer self.allocator.free(mem); defer self.allocator.free(mem);
var offset_mem = mem;
getStructData(nVal, &offset_mem); getStructData(nVal, &offset_mem);
const data = mem[0..size]; const data = mem[0..size];
@ -448,15 +480,12 @@ pub fn Column2ORM(comptime table_name: []const u8, comptime column_name: [:0]con
&rowIter &rowIter
)); ));
return Iter(struct_type){ return Iter(struct_type).init(self.allocator, rowIter);
.allocator = self.allocator,
.handle = rowIter,
};
} }
pub fn find(self: @This(), val: wrapped_type) !?*struct_type { pub fn find(self: @This(), val: wrapped_type) !?struct_type {
var iter = try self.filter(val); var iter = try self.filter(val);
return iter.one_or_null(); return try iter.next();
} }
pub fn delete(self: @This(), val: wrapped_type) !void { pub fn delete(self: @This(), val: wrapped_type) !void {
@ -471,8 +500,8 @@ pub fn Column2ORM(comptime table_name: []const u8, comptime column_name: [:0]con
const size: usize = getStructSize(nVal); const size: usize = getStructSize(nVal);
const mem = try self.allocator.alloc(u8, size); const mem = try self.allocator.alloc(u8, size);
var offset_mem = mem;
defer self.allocator.free(mem); defer self.allocator.free(mem);
var offset_mem = mem;
getStructData(nVal, &offset_mem); getStructData(nVal, &offset_mem);
const data = mem[0..size]; const data = mem[0..size];
@ -490,40 +519,100 @@ pub fn Column2ORM(comptime table_name: []const u8, comptime column_name: [:0]con
&deleted_fields &deleted_fields
); );
} }
pub fn update(self: @This(), val: struct_type) !void {
var table_id: TableId = undefined;
_ = spacetime.table_id_from_name(table_name.ptr, table_name.len, &table_id);
const temp_name: []const u8 = table_name ++ "_" ++ column_name ++ "_idx_btree";
var index_id = spacetime.IndexId{ ._inner = std.math.maxInt(u32) };
_ = spacetime.index_id_from_name(temp_name.ptr, temp_name.len, &index_id);
const size: usize = getStructSize(val);
const mem = try self.allocator.alloc(u8, size);
defer self.allocator.free(mem);
var offset_mem = mem;
getStructData(val, &offset_mem);
const data = mem[0..size];
var data_len = data.len;
_ = spacetime.datastore_update_bsatn(
table_id,
index_id,
data.ptr,
&data_len
);
}
}; };
} }
pub fn AutoIncStruct(base: type, autoincs: []const [:0]const u8) type {
return @Type(.{
.@"struct" = std.builtin.Type.Struct{
.backing_integer = null,
.decls = &.{},
.is_tuple = false,
.layout = .auto,
.fields = blk: {
var fields: []const std.builtin.Type.StructField = &.{};
for(autoincs) |autoinc| {
const member_type = utils.getMemberDefaultType(base, autoinc);
fields = fields ++ &[_]std.builtin.Type.StructField{
std.builtin.Type.StructField{
.is_comptime = false,
.name = autoinc,
.default_value_ptr = null,
.type = member_type,
.alignment = 0,
}
};
}
break :blk fields;
}
}
});
}
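Roughly speaking, AutoIncStruct builds a comptime struct containing only the auto-increment columns of a schema; insert then deserializes those columns from the buffer the host writes back and patches them into the returned row. A hypothetical illustration, assuming an Entity schema whose entity_id: u32 column is listed in autoinc:
// AutoIncStruct(Entity, &.{"entity_id"}) is roughly equivalent to declaring:
const EntityAutoInc = struct {
    entity_id: u32,
};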
pub fn Table2ORM(comptime table_name: []const u8) type { pub fn Table2ORM(comptime table_name: []const u8) type {
const table = blk: { const table = blk: {
for(spacetime.tables) |table| { for(spacetime.globalSpec.tables) |table| {
if(std.mem.eql(u8, table_name, table.name.?)) { if(std.mem.eql(u8, table_name, table.name)) {
break :blk table; break :blk table;
} }
} }
@compileError("Table " ++ table_name ++ " not found!");
}; };
const struct_type = table.schema; const struct_type = table.schema;
const autoinc_return_type = AutoIncStruct(struct_type, table.attribs.autoinc orelse &.{});
return struct { return struct {
allocator: std.mem.Allocator, allocator: std.mem.Allocator,
pub fn insert(self: @This(), data: struct_type) !void { pub fn insert(self: @This(), data: struct_type) !struct_type {
var id: TableId = undefined; var id: TableId = undefined;
_ = spacetime.table_id_from_name(table_name.ptr, table_name.len, &id); _ = spacetime.table_id_from_name(table_name.ptr, table_name.len, &id);
const raw_data = try StructSerializer(struct_type)(self.allocator, data); var raw_data = try StructSerializer(struct_type)(self.allocator, data);
defer self.allocator.free(raw_data); defer self.allocator.free(raw_data);
var raw_data_len: usize = raw_data.len; var raw_data_len: usize = raw_data.len;
_ = spacetime.datastore_insert_bsatn(id, raw_data.ptr, &raw_data_len); _ = spacetime.datastore_insert_bsatn(id, raw_data.ptr, &raw_data_len);
var data_copy = data;
const out = try StructDeserializer(autoinc_return_type)(self.allocator, &raw_data);
inline for(std.meta.fields(autoinc_return_type)) |field| {
@field(data_copy, field.name) = @field(out, field.name);
} }
pub fn iter(self: @This()) Iter(struct_type) { return data_copy;
}
pub fn iter(self: @This()) !Iter(struct_type) {
var id: TableId = undefined; var id: TableId = undefined;
_ = spacetime.table_id_from_name(table_name.ptr, table_name.len, &id); _ = spacetime.table_id_from_name(table_name.ptr, table_name.len, &id);
var rowIter: spacetime.RowIter = undefined; var rowIter: spacetime.RowIter = undefined;
_ = spacetime.datastore_table_scan_bsatn(id, &rowIter); _ = spacetime.datastore_table_scan_bsatn(id, &rowIter);
return Iter(struct_type){ return Iter(struct_type).init(self.allocator, rowIter);
.allocator = self.allocator,
.handle = rowIter,
};
} }
pub fn col(self: @This(), comptime column_name: [:0]const u8) Column2ORM(table_name, column_name) { pub fn col(self: @This(), comptime column_name: [:0]const u8) Column2ORM(table_name, column_name) {
@ -531,24 +620,36 @@ pub fn Table2ORM(comptime table_name: []const u8) type {
.allocator = self.allocator, .allocator = self.allocator,
}; };
} }
pub fn count(self: @This()) !u64 {
_ = self;
var id: TableId = undefined;
_ = spacetime.table_id_from_name(table_name.ptr, table_name.len, &id);
var val: u64 = undefined;
_ = try spacetime.retMap(spacetime.datastore_table_row_count(id, &val));
return val;
}
}; };
} }
pub const Local = struct {
allocator: std.mem.Allocator,
frame_allocator: std.mem.Allocator,
pub fn get(self: @This(), comptime table: []const u8) Table2ORM(table) {
return .{
.allocator = self.allocator, .allocator = self.frame_allocator,
}; };
} }
}; };
pub const ReducerContext = struct {
allocator: std.mem.Allocator,
sender: spacetime.Identity,
timestamp: spacetime.Timestamp,
connection_id: spacetime.ConnectionId,
db: Local,
rng: std.Random.DefaultPrng = std.Random.DefaultPrng.init(0),
}; };
pub const ReducerFn = fn(*ReducerContext) void;
@ -574,7 +675,7 @@ pub const RawMiscModuleExportV9 = enum {
RESERVED, RESERVED,
}; };
pub const RawSql = []u8; pub const RawSql = Str;
pub const RawRowLevelSecurityDefV9 = struct {
sql: RawSql,

View file

@ -1,3 +1,5 @@
// Copyright 2025 Tyler Peterson, Licensed under MPL-2.0
const std = @import("std");
pub fn getMemberDefaultType(t: type, comptime member: []const u8) type {