Compare commits

...

27 Commits
main ... ref1

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Kacper Marzecki | 324735710b | checkpoint sleep | 2025-06-24 01:06:40 +02:00 |
| Kacper Marzecki | 439747daf0 | dirty hack fix | 2025-06-24 00:43:41 +02:00 |
| Kacper Marzecki | 552ef6ce3b | checkpoint better rec types | 2025-06-23 23:50:30 +02:00 |
| Kacper Marzecki | 07d92aed97 | cleanup | 2025-06-23 19:26:04 +02:00 |
| Kacper Marzecki | 6c243439a9 | trace | 2025-06-23 17:34:51 +02:00 |
| Kacper Marzecki | a42c981fa0 | checkpoint but debug is still kinda fucked | 2025-06-21 15:16:16 +02:00 |
| Kacper Marzecki | 8a6f84238e | kinda works | 2025-06-20 17:39:23 +02:00 |
| Kacper Marzecki | bb7187b0c7 | some progress, still fucked | 2025-06-20 16:43:19 +02:00 |
| Kacper Marzecki | 3c7edc67da | fuckd | 2025-06-20 16:19:02 +02:00 |
| Kacper Marzecki | 8d8b3607fc | checkpoint, still fucked debugging | 2025-06-20 15:52:18 +02:00 |
| Kacper Marzecki | a78fe0541a | wip debug | 2025-06-20 10:28:53 +02:00 |
| Kacper Marzecki | f1243084c7 | checkpoint docs | 2025-06-19 18:42:34 +02:00 |
| Kacper Marzecki | cb8dddc326 | chat | 2025-06-19 13:17:18 +02:00 |
| Kacper Marzecki | 646afaad64 | fixpoint discussion | 2025-06-19 02:35:42 +02:00 |
| Kacper Marzecki | 658332ace1 | some progress with recursive checks | 2025-06-19 02:34:34 +02:00 |
| Kacper Marzecki | bcddae26cb | checkpoint before next attempt at recursive types - noice debugging here | 2025-06-19 01:49:08 +02:00 |
| Kacper Marzecki | ff557cb221 | checkpoint | 2025-06-19 01:03:09 +02:00 |
| Kacper Marzecki | 176a4c6a04 | maybe?? revert if not recursive works | 2025-06-18 21:14:22 +02:00 |
| Kacper Marzecki | 7fa469518d | consistency fixed | 2025-06-18 16:13:29 +02:00 |
| Kacper Marzecki | 5736c30fa2 | fix consistency checker | 2025-06-18 16:01:50 +02:00 |
| Kacper Marzecki | 36f63ed355 | checkpoint before fixing consistency tests | 2025-06-18 15:56:12 +02:00 |
| Kacper Marzecki | 130d369734 | checkpoint | 2025-06-18 15:44:49 +02:00 |
| Kacper Marzecki | a5e5127bcd | reconstructor & normalization | 2025-06-18 15:10:58 +02:00 |
| Kacper Marzecki | 9f6cd2814a | checkpoint | 2025-06-18 15:00:36 +02:00 |
| Kacper Marzecki | 0880bb338f | checkpoint new implementation | 2025-06-18 14:29:40 +02:00 |
| Kacper Marzecki | 967eb93aa8 | checkpoint | 2025-06-17 19:48:50 +02:00 |
| Kacper Marzecki | 8b6171283d | checkpoint | 2025-06-17 12:56:04 +02:00 |
11 changed files with 6786 additions and 1321 deletions


@ -57438,3 +57438,141 @@ defmodule TupleTestHelpers do
> Tokens: 25k sent, 457 received. Cost: $0.04 message, $0.08 session.
> Applied edit to test.exs
#### /clear
#### /ask I've got a problem with debugging this thing. In some test cases I need to access a detailed trace of whats going on. Most of the time I don't care, because they pass. We need to add detailed logs which i can enable only for a select test case via a Process switch. Lets design it
Okay, let's design a conditional logging mechanism for detailed tracing.
**Design Proposal: Conditional Debug Tracing**
We'll introduce a tracing facility within the existing `Tdd.Debug` module. This facility will be controlled by a flag in the process dictionary, allowing tests to enable detailed logging on demand.
**1. Control Mechanism (in `Tdd.Debug` module):**
* **Process Flag:** A key, for example, `@trace_flag :tdd_debug_trace_enabled`, will be used with `Process.put/2` and `Process.get/2`.
* **Control Functions:**
* `enable_trace/0`: Sets the process flag to `true`.
```elixir
# In Tdd.Debug
@trace_flag :tdd_debug_trace_enabled
def enable_trace, do: Process.put(@trace_flag, true)
```
* `disable_trace/0`: Sets the process flag to `false` (or removes it).
```elixir
# In Tdd.Debug
def disable_trace, do: Process.put(@trace_flag, false)
```
* `trace_enabled?/0`: Checks the state of the flag. It must be public, because the code expanded from the macros calls it as `Tdd.Debug.trace_enabled?()` from the caller's module.
```elixir
# In Tdd.Debug
def trace_enabled?, do: Process.get(@trace_flag, false) == true
```
**2. Logging Macros (in `Tdd.Debug` module):**
To ensure that potentially expensive operations (like data inspection or string formatting) are only performed when tracing is active, the macros will accept zero-arity functions that produce the content to be logged.
* `dlog_msg(message_producer_fun)`: For logging simple string messages.
* `message_producer_fun`: A `fn -> "message string" end`.
* *Behavior*: If tracing is enabled, it executes `message_producer_fun.()` and prints the result using `IO.puts/1`.
```elixir
# In Tdd.Debug
defmacro dlog_msg(message_producer_fun) do
quote do
if Tdd.Debug.trace_enabled?() do
IO.puts(unquote(message_producer_fun).())
end
end
end
```
* `dlog(label, data_producer_fun)`: For inspecting data with a label.
* `label`: An atom or string for `IO.inspect/2`.
* `data_producer_fun`: A `fn -> data_to_inspect end`.
* *Behavior*: If tracing is enabled, it executes `data_producer_fun.()` and inspects the result using `IO.inspect(result, label: label)`.
```elixir
# In Tdd.Debug
defmacro dlog(label, data_producer_fun) do
quote do
if Tdd.Debug.trace_enabled?() do
IO.inspect(unquote(data_producer_fun).(), label: unquote(label))
end
end
end
```
* `dlog_action(action_producer_fun)`: For executing an action (like calling `Tdd.Debug.print/1`) only when tracing is enabled.
* `action_producer_fun`: A `fn -> perform_io_action() end`.
* *Behavior*: If tracing is enabled, it executes `action_producer_fun.()`.
```elixir
# In Tdd.Debug
defmacro dlog_action(action_producer_fun) do
quote do
if Tdd.Debug.trace_enabled?() do
unquote(action_producer_fun).()
end
:ok # Consistent return value whether or not tracing is enabled
end
end
```
**3. Usage Example:**
* **In a specific test case:**
```elixir
defmodule MyProblematicTest do
use ExUnit.Case
alias Tdd.Debug
setup do
Debug.enable_trace()
on_exit(fn -> Debug.disable_trace() end) # Ensure cleanup
:ok
end
test "scenario requiring detailed trace" do
# ... test code that calls into TDD modules ...
assert true # placeholder
end
end
```
* **In core logic (e.g., `Tdd.Compiler.loop_until_stable/3`):**
The existing `IO.puts` and `IO.inspect` calls can be converted.
```elixir
# In Tdd.Compiler, after aliasing or requiring Tdd.Debug
# alias Tdd.Debug
defp loop_until_stable(prev_id, step_function, iteration \\ 0) do
Debug.dlog_msg(fn -> "\n--- Fixed-Point Iteration: #{iteration} ---" end)
Debug.dlog("prev_id", fn -> prev_id end)
Debug.dlog_action(fn -> Tdd.Debug.print(prev_id) end) # For detailed graph view
raw_next_id = step_function.(prev_id)
Debug.dlog("raw_next_id (after step_function)", fn -> raw_next_id end)
Debug.dlog_action(fn -> Tdd.Debug.print(raw_next_id) end)
next_id = Algo.simplify(raw_next_id)
Debug.dlog("next_id (after simplify)", fn -> next_id end)
Debug.dlog_action(fn -> Tdd.Debug.print(next_id) end)
if next_id == prev_id do
Debug.dlog_msg(fn -> "--- Fixed-Point Reached! ---" end)
next_id
else
# ... existing safety break and recursive call ...
if iteration > 2 do # Example existing condition
# ...
end
loop_until_stable(next_id, step_function, iteration + 1)
end
end
```
This design provides a flexible way to get detailed traces for specific scenarios without affecting the performance or output of regular test runs. The use of functions ensures lazy evaluation of potentially costly debug message formatting.
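To make the lazy-evaluation point concrete, here is a minimal standalone sketch of the switch-plus-producer pattern (module and flag names are illustrative, not the actual `Tdd.Debug` API):

```elixir
# Minimal sketch: a process-dictionary switch plus zero-arity producer
# functions, so expensive formatting only runs when tracing is enabled.
defmodule TraceSketch do
  @flag :trace_sketch_enabled

  def enable, do: Process.put(@flag, true)
  def disable, do: Process.put(@flag, false)
  def enabled?, do: Process.get(@flag, false) == true

  # Plain-function variant of dlog_msg: the producer is only invoked
  # when tracing is on, so its cost is skipped otherwise.
  def log(producer) when is_function(producer, 0) do
    if enabled?(), do: IO.puts(producer.())
    :ok
  end
end

# Disabled: the producer is never called, so this raise never fires.
TraceSketch.log(fn -> raise "expensive formatting avoided" end)

TraceSketch.enable()
TraceSketch.log(fn -> "computed only when enabled: #{1 + 1}" end)
TraceSketch.disable()
```

Run with `elixir sketch.exs`; only the second `log` call produces output, which is exactly the property the macro design relies on.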
> Tokens: 32k sent, 1.3k received. Cost: $0.05 message, $0.12 session.


@ -4185,3 +4185,9 @@
# 2025-06-16 11:51:48.065221
+n
# 2025-06-19 10:22:37.019564
+/clear
# 2025-06-19 10:25:14.825350
+/ask I've got a problem with debugging this thing. In some test cases I need to access a detailed trace of whats going on. Most of the time I don't care, because they pass. We need to add detailed logs which i can enable only for a select test case via a Process switch. Lets design it

debug.exs Normal file

@ -0,0 +1,514 @@
defmodule Tdd.Debug.TracerData do
@moduledoc """
Provides functions to sanitize Elixir terms for tracing and JSON serialization.
"""
@max_sanitize_depth 5
def sanitize_values(list) when is_list(list) do
Enum.map(list, &sanitize_value/1)
end
def sanitize_value(val) do
do_sanitize(val, @max_sanitize_depth)
end
defp do_sanitize(val, _depth_limit) when is_integer(val) or is_float(val) or is_atom(val) or is_boolean(val), do: val
defp do_sanitize(val, _depth_limit) when is_binary(val), do: val # Assume strings are fine
defp do_sanitize(nil, _depth_limit), do: nil
defp do_sanitize(val, _depth_limit) when is_pid(val), do: inspect(val)
defp do_sanitize(val, _depth_limit) when is_port(val), do: inspect(val)
defp do_sanitize(val, _depth_limit) when is_reference(val), do: inspect(val)
defp do_sanitize(val, _depth_limit) when is_function(val) do
try do
arity = Function.info(val, :arity) |> elem(1)
case Function.info(val, :name) do
{:name, name} when is_atom(name) and name != :"" -> "<Function #{Atom.to_string(name)}/#{arity}>"
_ -> "<Function/#{arity}>"
end
catch
_, _ -> "<Function>"
end
end
defp do_sanitize(val, depth_limit) when is_list(val) do
if depth_limit <= 0 do
"[...trimmed_list...]"
else
Enum.map(val, &do_sanitize(&1, depth_limit - 1))
end
end
defp do_sanitize(val, depth_limit) when is_map(val) do
if depth_limit <= 0 do
"%{...trimmed_map...}"
else
if Map.has_key?(val, :__struct__) do
module = val.__struct__
# Sanitize only exposed fields, not internal :__meta__ or the :__struct__ key itself
data_to_sanitize = val |> Map.delete(:__struct__) |> Map.delete(:__meta__)
sanitized_fields =
data_to_sanitize
|> Enum.map(fn {k, v} -> {k, do_sanitize(v, depth_limit - 1)} end)
|> Enum.into(%{})
%{type: :struct, module: module, fields: sanitized_fields}
else # Regular map
val
|> Enum.map(fn {k, v} -> {do_sanitize(k, depth_limit - 1), do_sanitize(v, depth_limit - 1)} end)
|> Enum.into(%{})
end
end
end
defp do_sanitize(val, depth_limit) when is_tuple(val) do
if depth_limit <= 0 do
"{...trimmed_tuple...}"
else
# Tuples are not JSON-encodable; represent them as (sanitized) lists.
val |> Tuple.to_list() |> Enum.map(&do_sanitize(&1, depth_limit - 1))
end
end
# Fallback for other types (e.g., DateTime, Date, Time, NaiveDateTime)
defp do_sanitize(val, _depth_limit) do
# For common structs that have `to_string` or are Calendar types
cond do
is_struct(val, DateTime) or is_struct(val, Date) or is_struct(val, Time) or is_struct(val, NaiveDateTime) ->
inspect(val) # Rendered as a sigil string (e.g. "~D[...]"), which is JSON-safe
true ->
# Generic struct or unknown type
if is_struct(val) do
"Struct<#{inspect(val.__struct__)}>" # A simpler representation
else
inspect(val) # Fallback, converts to string
end
end
end
def sanitize_error(exception_instance, stacktrace) do
%{
type: :error,
class: Atom.to_string(exception_instance.__struct__),
message: Exception.message(exception_instance),
# Sanitize stacktrace to keep it manageable
stacktrace: Enum.map(stacktrace, &Exception.format_stacktrace_entry/1) |> Enum.take(15)
}
end
end
defmodule Tdd.Debug do
@moduledoc """
Provides macros to wrap `def` and `defp` for function call/return tracing.
Logs arguments and return values using `IO.inspect`.
Builds an in-memory call tree data structure per traced process.
"""
# --- Agent for Tracing State ---
@agent_name Tdd.Debug.StateAgent
def init_agent_if_needed do
case Process.whereis(@agent_name) do
nil -> Agent.start_link(fn -> MapSet.new() end, name: @agent_name)
_pid -> :ok
end
:ok
end
def add_traced_pid(pid) when is_pid(pid) do
init_agent_if_needed()
Agent.update(@agent_name, &MapSet.put(&1, pid))
end
def remove_traced_pid(pid) when is_pid(pid) do
case Process.whereis(@agent_name) do
nil -> :ok
agent_pid -> Agent.cast(agent_pid, fn state -> MapSet.delete(state, pid) end)
end
end
def is_pid_traced?(pid) when is_pid(pid) do
case Process.whereis(@agent_name) do
nil ->
false
agent_pid ->
try do
Agent.get(agent_pid, &MapSet.member?(&1, pid), :infinity)
catch
# The agent may have died mid-call; an exit is not an exception,
# so it must be handled with `catch`, not `rescue`.
:exit, _reason -> false
end
end
end
# --- Tracing Data Storage ---
@doc """
Initializes the tracing data structures in the process dictionary.
"""
def init_trace_storage do
Process.put(:tdd_debug_call_stack, [])
Process.put(:tdd_debug_session_roots, [])
:ok
end
@doc """
Retrieves the collected trace data (a list of root call tree nodes)
for the current process and clears it from the process dictionary.
Children within each node and the root list itself are reversed to be in chronological order.
"""
def get_and_clear_trace_data do
data = Process.get(:tdd_debug_session_roots, [])
# Reverse children recursively and then reverse the list of roots
processed_data = Enum.map(data, &reverse_children_recursively/1) |> Enum.reverse()
Process.delete(:tdd_debug_session_roots)
Process.delete(:tdd_debug_call_stack)
processed_data
end
defp reverse_children_recursively(node) do
if !node.children or node.children == [] do
node
else
reversed_and_processed_children =
Enum.map(node.children, &reverse_children_recursively/1)
|> Enum.reverse() # Children were added prepending, so reverse for chronological
%{node | children: reversed_and_processed_children}
end
end
# --- Tracing Control Functions ---
@doc "Enables function call tracing for the current process."
def enable_tracing do
pid_to_trace = self()
add_traced_pid(pid_to_trace)
init_trace_storage() # Initialize storage for call trees
# Monitor from a dedicated helper process so the :DOWN message is
# delivered to the process that is actually waiting for it.
spawn(fn ->
ref = Process.monitor(pid_to_trace)
receive do
{:DOWN, ^ref, :process, ^pid_to_trace, _reason} ->
remove_traced_pid(pid_to_trace)
after
3_600_000 -> # 1 hour safety timeout
remove_traced_pid(pid_to_trace)
end
end)
:ok
end
@doc "Disables function call tracing for the current process."
def disable_tracing do
remove_traced_pid(self())
# Note: Does not clear trace data, get_and_clear_trace_data() does that.
:ok
end
@doc """
Runs the given 0-arity function with tracing enabled.
Returns a tuple: `{{:ok, result} | {:error, {exception, stacktrace}}, trace_data}`.
Trace data is a list of call tree root nodes.
"""
def run(fun) when is_function(fun, 0) do
enable_tracing()
outcome =
try do
{:ok, fun.()}
rescue
exception -> {:error, {exception, __STACKTRACE__}}
end
trace_data = get_and_clear_trace_data()
disable_tracing()
# trace_data
# |> IO.inspect()
binary = JSON.encode!(trace_data)
File.write("trace.json", binary)
{outcome, trace_data}
end
# --- Process Dictionary for Call Depth (used for printing indent) ---
defp get_depth, do: Process.get(:tdd_debug_depth, 0)
def increment_depth do
new_depth = get_depth() + 1
Process.put(:tdd_debug_depth, new_depth)
new_depth
end
def decrement_depth do
new_depth = max(0, get_depth() - 1)
Process.put(:tdd_debug_depth, new_depth)
new_depth
end
# --- Core Macro Logic ---
defmacro __using__(_opts) do
quote do
import Kernel, except: [def: 1, def: 2, defp: 1, defp: 2]
import Tdd.Debug # Import this module's public functions/macros
require Tdd.Debug # Ensure this module is compiled for macros
end
end
@doc false
defmacro def(call, clauses \\ Keyword.new()) do
generate_traced_function(:def, call, clauses, __CALLER__)
end
@doc false
defmacro defp(call, clauses \\ Keyword.new()) do
generate_traced_function(:defp, call, clauses, __CALLER__)
end
defp is_simple_variable_ast?(ast_node) do
case ast_node do
{var_name, _meta, context_module} when is_atom(var_name) and (context_module == nil or is_atom(context_module)) ->
var_name != :_ # Ensure it's not the underscore atom itself
_ ->
false
end
end
defp generate_traced_function(type, call_ast, clauses, caller_env) do
require Macro
{function_name_ast, meta_call, original_args_patterns_ast_nullable} = call_ast
original_args_patterns_ast_list = original_args_patterns_ast_nullable || []
original_body_ast =
(if Keyword.keyword?(clauses) do
Keyword.get(clauses, :do, clauses)
else
clauses
end) || quote(do: nil)
# --- Argument Handling for Logging (Existing Logic) ---
mapped_and_generated_vars_tuples =
Enum.map(Enum.with_index(original_args_patterns_ast_list), fn {original_pattern_ast, index} ->
td_arg_var = Macro.var(String.to_atom("__td_arg_#{index}__"), nil)
{final_pattern_for_head, rhs_for_td_arg_assignment} =
case original_pattern_ast do
{:_, _, _} = underscore_ast -> {underscore_ast, quote(do: :__td_ignored_argument__)}
{:=, _meta_assign, [lhs_of_assign, _rhs_of_assign]} = assignment_pattern_ast ->
if is_simple_variable_ast?(lhs_of_assign) do
{assignment_pattern_ast, lhs_of_assign}
else
captured_val_var = Macro.unique_var(String.to_atom("tdc_assign_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(assignment_pattern_ast) end
{new_head_pattern, captured_val_var}
end
{:\\, _meta_default, [pattern_before_default, _default_value_ast]} = default_arg_pattern_ast ->
cond do
is_simple_variable_ast?(pattern_before_default) ->
{default_arg_pattern_ast, pattern_before_default}
match?({:=, _, [lhs_inner_assign, _]}, pattern_before_default) and is_simple_variable_ast?(pattern_before_default |> elem(2) |> Enum.at(0)) ->
{:=, _, [lhs_inner_assign, _]} = pattern_before_default
{default_arg_pattern_ast, lhs_inner_assign}
true ->
captured_val_var = Macro.unique_var(String.to_atom("tdc_def_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(default_arg_pattern_ast) end
{new_head_pattern, captured_val_var}
end
ast_node ->
if is_simple_variable_ast?(ast_node) do
{ast_node, ast_node}
else
captured_val_var = Macro.unique_var(String.to_atom("tdc_pat_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(ast_node) end
{new_head_pattern, captured_val_var}
end
end
assignment_ast = quote do unquote(td_arg_var) = unquote(rhs_for_td_arg_assignment) end
{final_pattern_for_head, assignment_ast, td_arg_var}
end)
{new_args_patterns_for_head_list, assignments_for_logging_vars_ast_list, generated_vars_to_log_asts} =
if mapped_and_generated_vars_tuples == [] do
{[], [], []}
else
# Transpose list of 3-element tuples into 3 lists
list_of_lists = Enum.map(mapped_and_generated_vars_tuples, &Tuple.to_list(&1))
[
Enum.map(list_of_lists, &List.first(&1)),
Enum.map(list_of_lists, &Enum.at(&1, 1)),
Enum.map(list_of_lists, &Enum.at(&1, 2))
]
|> then(fn [a,b,c] -> {a,b,c} end)
end
new_call_ast = {function_name_ast, meta_call, new_args_patterns_for_head_list}
# --- Variables for Runtime Info ---
# These vars will hold string versions of module/function name and arity at runtime
# Hygienic vars to avoid collisions.
module_name_runtime_var = Macro.var(:__td_module_name__, __MODULE__)
printable_fn_name_runtime_var = Macro.var(:__td_printable_fn_name__, __MODULE__)
arity_runtime_var = Macro.var(:__td_arity__, __MODULE__)
# Arity calculation at macro expansion time
arity_value = length(original_args_patterns_ast_list)
traced_body_inner_ast =
quote do
# --- Resolve Module/Function/Arity Info at Runtime ---
unquote(module_name_runtime_var) = Atom.to_string(unquote(caller_env.module))
unquote(arity_runtime_var) = unquote(arity_value)
runtime_resolved_fn_name_val = unquote(function_name_ast) # Resolve var if func name is from a var
unquote(printable_fn_name_runtime_var) =
if is_atom(runtime_resolved_fn_name_val) do
Atom.to_string(runtime_resolved_fn_name_val)
else
Macro.to_string(runtime_resolved_fn_name_val) # For unquote(var) or operators
end
# --- Main Tracing Logic ---
if Tdd.Debug.is_pid_traced?(self()) do
# --- On Entry: Prepare Data & Log ---
unquote_splicing(assignments_for_logging_vars_ast_list) # Assign __td_arg_N__ vars
runtime_arg_values_for_node_and_log = [unquote_splicing(generated_vars_to_log_asts)]
# sanitized_args_for_node = Tdd.Debug.TracerData.sanitize_values(runtime_arg_values_for_node_and_log)
sanitized_args_for_node = inspect(runtime_arg_values_for_node_and_log)
current_call_print_depth = Tdd.Debug.increment_depth() # For print indent & node depth
# Create placeholder node, push to call_stack
new_node_details = %{
id: System.unique_integer([:positive, :monotonic]),
# module: unquote(module_name_runtime_var),
# function: unquote(printable_fn_name_runtime_var),
function: "#{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}",
# arity: unquote(arity_runtime_var),
args: sanitized_args_for_node,
depth: current_call_print_depth,
# timestamp_enter_monotonic: System.monotonic_time(),
children: [], # Will be populated in reverse chronological order
result: :__td_not_returned_yet__ # Placeholder
}
Process.put(:tdd_debug_call_stack, [new_node_details | Process.get(:tdd_debug_call_stack, [])])
# Logging Call (existing logic)
indent = String.duplicate(" ", current_call_print_depth - 1)
IO.puts("#{indent}CALL: #{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}")
IO.puts("#{indent} ARGS: #{inspect(runtime_arg_values_for_node_and_log)}")
try do
# --- Execute Original Body ---
result_val = unquote(original_body_ast)
# --- On Normal Exit: Finalize Node & Log ---
[completed_node_placeholder | parent_stack_nodes] = Process.get(:tdd_debug_call_stack)
ts_exit_monotonic = System.monotonic_time()
# duration_ms = System.convert_time_unit(ts_exit_monotonic - completed_node_placeholder.timestamp_enter_monotonic, :native, :millisecond)
sanitized_result_val = Tdd.Debug.TracerData.sanitize_value(result_val)
finalized_node =
completed_node_placeholder
# |> Map.put(:timestamp_exit_monotonic, ts_exit_monotonic)
# |> Map.put(:duration_ms, duration_ms)
|> Map.put(:result, %{type: :ok, value: sanitized_result_val})
_ = Tdd.Debug.decrement_depth() # For print indent
# Update call stack: Add finalized_node to parent's children or session_roots
new_call_stack_after_success =
case parent_stack_nodes do
[parent_on_stack | grand_parent_stack_nodes] ->
updated_parent = %{parent_on_stack | children: [finalized_node | parent_on_stack.children]}
[updated_parent | grand_parent_stack_nodes]
[] -> # This was a root call
Process.put(:tdd_debug_session_roots, [finalized_node | Process.get(:tdd_debug_session_roots, [])])
[] # New call_stack is empty for this branch
end
Process.put(:tdd_debug_call_stack, new_call_stack_after_success)
# Logging Return
IO.puts("#{indent}RETURN from #{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}: #{inspect(result_val)}")
result_val # Return actual result
rescue
exception_class_err ->
# --- On Error Exit: Finalize Node & Log ---
error_instance_err = exception_class_err
stacktrace_err = __STACKTRACE__
[erroring_node_placeholder | parent_stack_nodes_err] = Process.get(:tdd_debug_call_stack)
# Entry timestamps are disabled at node creation above, so skip the
# exit-timestamp/duration bookkeeping here; reading the missing
# :timestamp_enter_monotonic key would raise a KeyError.
sanitized_error_info = Tdd.Debug.TracerData.sanitize_error(error_instance_err, stacktrace_err)
finalized_error_node =
erroring_node_placeholder
|> Map.put(:result, sanitized_error_info)
_ = Tdd.Debug.decrement_depth() # For print indent
# Update call stack: Add finalized_error_node to parent's children or session_roots
new_call_stack_after_error =
case parent_stack_nodes_err do
[parent_on_stack_err | grand_parent_stack_nodes_err] ->
updated_parent_err = %{parent_on_stack_err | children: [finalized_error_node | parent_on_stack_err.children]}
[updated_parent_err | grand_parent_stack_nodes_err]
[] -> # This was a root call that errored
Process.put(:tdd_debug_session_roots, [finalized_error_node | Process.get(:tdd_debug_session_roots, [])])
[] # New call_stack is empty for this branch
end
Process.put(:tdd_debug_call_stack, new_call_stack_after_error)
# Logging Error
IO.puts("#{indent}ERROR in #{unquote(module_name_runtime_var)}.#{unquote(printable_fn_name_runtime_var)}: #{inspect(error_instance_err)}")
reraise error_instance_err, stacktrace_err # Reraise the original error
end
else
# --- Not Traced: Execute Original Body Directly ---
unquote(original_body_ast)
end
end
# --- Final Definition ---
final_definition_ast =
quote location: :keep do
Kernel.unquote(type)(
unquote(new_call_ast),
do: unquote(traced_body_inner_ast)
)
end
final_definition_ast
end
# --- TDD Graph Printing (Kept as it was, assuming it's for a different purpose) ---
@doc "Prints a formatted representation of a TDD graph structure."
def print_tdd_graph(id, store_module \\ Tdd.Store) do
IO.puts("--- TDD Graph (ID: #{id}) ---")
do_print_tdd_node(id, 0, MapSet.new(), store_module)
IO.puts("------------------------")
end
defp do_print_tdd_node(id, indent_level, visited, store_module) do
prefix = String.duplicate(" ", indent_level)
if MapSet.member?(visited, id) do
IO.puts("#{prefix}ID #{id} -> [Seen, recursive link]")
:ok
else
new_visited = MapSet.put(visited, id)
case store_module.get_node(id) do # Assumes store_module.get_node/1 exists
{:ok, :true_terminal} -> IO.puts("#{prefix}ID #{id} -> TRUE")
{:ok, :false_terminal} -> IO.puts("#{prefix}ID #{id} -> FALSE")
{:ok, {var, y_id, n_id, dc_id}} ->
IO.puts("#{prefix}ID #{id}: IF #{inspect(var)}")
IO.puts("#{prefix} ├─ Yes (to ID #{y_id}):")
do_print_tdd_node(y_id, indent_level + 2, new_visited, store_module)
IO.puts("#{prefix} ├─ No (to ID #{n_id}):")
do_print_tdd_node(n_id, indent_level + 2, new_visited, store_module)
IO.puts("#{prefix} └─ DC (to ID #{dc_id}):")
do_print_tdd_node(dc_id, indent_level + 2, new_visited, store_module)
{:error, reason} ->
IO.puts("#{prefix}ID #{id}: ERROR - #{reason}")
end
end
end
end

debug.md Normal file

@ -0,0 +1,10 @@
I have a compiler I'm writing in Elixir. I need to trace execution of logic to pinpoint bugs.
I have many tests I want to debug individually, but I can run only the full test suite.
I want to create a couple of macros/functions that'd enable me to debug my code.
The scenario I imagine:
before the test I want to debug I write `Tdd.Debug.enable`, and then:
as the code is being executed, it prints out a tree of called functions, their arguments and return values, effectively building a stacktrace as it's being executed.
First, let's design how we'd print this information and if/what we should store in memory to be able to render it in a shape sufficient for good debugging.
Then let's design an Elixir module that compiler modules would `use`,
e.g. a `Tdd.Debug` module that'd have an override for Elixir's `def`.

fixpoint.md Normal file

@ -0,0 +1,175 @@
You are absolutely right. My apologies. I was so focused on the coinductive base case that I missed the more subtle issue your "FIX" comment was hinting at. The change I proposed (`Store.true_node_id()`) is a common approach, but your codebase has a different, equally valid coinductive mechanism that I overlooked.
Let's analyze your version of `do_simplify` and the remaining failures. Your implementation is actually very clever.
### Deconstructing Your `do_simplify`
Your version has two key features that differ from a standard implementation:
1. **Coinductive Base Case:** `if MapSet.member?(context, current_state), do: Store.false_node_id()`. This sets the base case for a recursive loop to `false` (or `none`). This is the standard for **Greatest Fixed-Point (GFP)** semantics. It's used for proving properties on potentially infinite structures (like streams) where you must show that no "bad" prefix exists. An empty loop means the property holds vacuously.
2. **Implication Logic:**
```elixir
assumptions_imply_true ->
do_simplify(y, sorted_assumptions, new_context)
```
This is the critical part. When the current set of assumptions *already implies* that the variable `var` must be true, you correctly follow the `yes` path **without adding `{var, true}` to the assumption set**. This prevents the assumption set from growing with redundant information, which is essential for the `context` check (`MapSet.member?`) to detect a true loop.
### The Real Root Cause
The problem is the mismatch between the type of recursion we have and the type of coinductive proof we're using.
* **`list_of(E)`:** This is defined with a union (`[] | cons(...)`). This is a **Least Fixed-Point (LFP)** type. It describes finite structures built from the ground up. To prove `x` is in `list_of(E)`, you need to show it can be constructed in a finite number of steps. A coinductive proof for this should have a base case of **TRUE**.
* **Our `do_simplify`:** It uses a GFP strategy (`base case = FALSE`). This is why it's failing.
**Why did changing the base case to `TRUE` not work in my previous suggestion?**
Because your `is_subtype?` check is `A & !B == none`. The `!` negation flips the logic. `!list_of(E)` is no longer a simple LFP type. It becomes a complex predicate involving both intersections and unions. When `simplify` tries to unroll `!list_of(E)`, it's working with a GFP-like structure. My previous diagnosis was correct in spirit but missed how negation interacts with your proof strategy.
**Let's analyze the failures with this new understanding:**
1. **`FAIL: list_of(supertype) </: list_of(subtype)` gets `true`**
* We check `list_of(int) & !list_of(1) == none`.
* The algorithm tries to prove this intersection is empty. It unrolls the recursion.
* It enters a loop while recursively checking the list's `tail` against itself.
* The repeated `{tdd_id, assumptions}` state is detected as a loop.
* `do_simplify` returns `Store.false_node_id()`.
* This `false` means the intersection is empty, so `is_subtype?` returns `true`. This is incorrect. The intersection should be non-empty.
2. **`FAIL: cons(atom, []) <: list_of(atom)` gets `false`**
* We check `cons(atom,[]) & !list_of(atom) == none`.
* The algorithm correctly follows the path for `cons(atom, [])`.
* It checks this against the `!list_of(atom)` TDD.
* Because `!list_of(atom)` is a complex, negated recursive type, our simple GFP solver isn't powerful enough to fully reduce the graph to `none`. It terminates with a non-`none` graph.
* `is_subtype?` returns `false`. This is incorrect.
### The Correct and Final Solution
The problem is that a single `simplify` function with a fixed coinductive strategy (`GFP` or `LFP`) cannot handle both positive and negative recursive types correctly. The standard solution in advanced model checkers and type systems is to have two separate functions, often called `Check` (for GFP) and `Unfold` (for LFP).
However, we can simulate this with a single, elegant change. We can parameterize `do_simplify` to tell it which fixed-point semantic to use.
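The GFP/LFP distinction itself can be illustrated with a toy cycle-detecting solver, independent of the TDD machinery (all names here are illustrative):

```elixir
# Toy recursive-predicate solver: nodes form a (possibly cyclic) graph of
# boolean formulas, and the answer on a detected cycle is the strategy's
# base case -- true for GFP, false for LFP.
defmodule FixpointSketch do
  # graph: %{id => true | false | {:and, [ids]} | {:or, [ids]}}
  def eval(graph, id, strategy) when strategy in [:gfp, :lfp] do
    do_eval(graph, id, strategy, MapSet.new())
  end

  defp do_eval(graph, id, strategy, seen) do
    if MapSet.member?(seen, id) do
      # Cycle detected: GFP assumes the property holds on the infinite
      # unfolding; LFP demands a finite derivation, so it fails.
      strategy == :gfp
    else
      seen = MapSet.put(seen, id)
      case Map.fetch!(graph, id) do
        b when is_boolean(b) -> b
        {:and, ids} -> Enum.all?(ids, &do_eval(graph, &1, strategy, seen))
        {:or, ids} -> Enum.any?(ids, &do_eval(graph, &1, strategy, seen))
      end
    end
  end
end

# A definition with no base case, analogous to `t = cons(int, t)`:
cyclic = %{a: {:and, [:a]}}
IO.inspect(FixpointSketch.eval(cyclic, :a, :gfp))  # true
IO.inspect(FixpointSketch.eval(cyclic, :a, :lfp))  # false

# With a base case, analogous to `list = [] | cons(int, list)`,
# both strategies agree:
list_like = %{list: {:or, [:nil_case, :cons_case]}, nil_case: true, cons_case: {:and, [:list]}}
IO.inspect(FixpointSketch.eval(list_like, :list, :lfp))  # true
```

The same asymmetry is what the `:gfp`/`:lfp` parameter below encodes: cycles in a union-built (LFP) type only count if some branch bottoms out in a base case.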
Let's modify `Algo.simplify` and `Tdd.Compiler`'s subtyping check.
**Step 1: Parameterize `do_simplify`**
In `lib/tdd/algo.ex`:
```elixir
defmodule Tdd.Algo do
  # ...

  # The public simplify function is unchanged. It defaults to GFP (:gfp).
  @spec simplify(non_neg_integer(), map()) :: non_neg_integer()
  def simplify(tdd_id, assumptions \\ %{}) do
    simplify(tdd_id, assumptions, :gfp) # Default to Greatest Fixed-Point
  end

  # NEW: Public function with an explicit fixed-point strategy
  @spec simplify(non_neg_integer(), map(), :gfp | :lfp) :: non_neg_integer()
  def simplify(tdd_id, assumptions, fp_strategy) do
    sorted_assumptions = Enum.sort(assumptions)
    # The cache key MUST include the strategy.
    cache_key = {:simplify, tdd_id, sorted_assumptions, fp_strategy}

    case Store.get_op_cache(cache_key) do
      {:ok, result_id} ->
        result_id

      :not_found ->
        result_id = do_simplify(tdd_id, sorted_assumptions, MapSet.new(), fp_strategy)
        Store.put_op_cache(cache_key, result_id)
        result_id
    end
  end

  # The private helper now takes the fixed-point strategy.
  defp do_simplify(tdd_id, sorted_assumptions, context, fp_strategy) do
    current_state = {tdd_id, sorted_assumptions} # Context is strategy-independent

    if MapSet.member?(context, current_state) do
      # --- THIS IS THE FIX ---
      # Choose the base case based on the requested strategy.
      case fp_strategy do
        :gfp -> Store.false_node_id() # Greatest Fixed-Point (intersection-like)
        :lfp -> Store.true_node_id()  # Least Fixed-Point (union-like)
      end
    else
      new_context = MapSet.put(context, current_state)
      assumptions = Map.new(sorted_assumptions)

      # The rest of the function passes the strategy down in recursive calls.
      if Engine.check(assumptions) == :contradiction do
        Store.false_node_id()
      else
        case Store.get_node(tdd_id) do
          {:ok, :true_terminal} -> Store.true_node_id()
          {:ok, :false_terminal} -> Store.false_node_id()

          {:ok, {var, y, n, d}} ->
            case Map.get(assumptions, var) do
              true ->
                do_simplify(y, sorted_assumptions, new_context, fp_strategy)

              false ->
                do_simplify(n, sorted_assumptions, new_context, fp_strategy)

              :dc ->
                do_simplify(d, sorted_assumptions, new_context, fp_strategy)

              nil ->
                assumptions_imply_true = # ... (logic is the same)
                assumptions_imply_false = # ... (logic is the same)

                cond do
                  # ...
                  assumptions_imply_true ->
                    do_simplify(y, sorted_assumptions, new_context, fp_strategy)

                  assumptions_imply_false ->
                    do_simplify(n, sorted_assumptions, new_context, fp_strategy)

                  true ->
                    s_y = do_simplify(y, Enum.sort(Map.put(assumptions, var, true)), new_context, fp_strategy)
                    s_n = do_simplify(n, Enum.sort(Map.put(assumptions, var, false)), new_context, fp_strategy)
                    s_d = do_simplify(d, Enum.sort(Map.put(assumptions, var, :dc)), new_context, fp_strategy)
                    Store.find_or_create_node(var, s_y, s_n, s_d)
                end
            end
        end
      end
    end
  end
end
```
**Step 2: Update the Subtyping Check**
Now, we update the test helper `do_is_subtype` to use the correct strategy. The check `A <: B` is equivalent to `A & ¬B == none`. Since `A` is an LFP type, the emptiness test on that intersection must run under LFP semantics, so that `A` is correctly unfolded while checking membership against `¬B`.
In `TddCompilerRecursiveTests` and `CompilerAlgoTests`:
```elixir
# In the test modules
defp do_is_subtype(spec1, spec2) do
  id1 = Compiler.spec_to_id(spec1)
  id2 = Compiler.spec_to_id(spec2)

  # A <: B  <=>  A & ~B == none
  # This is equivalent to asking: is there any element in A that is NOT in B?
  intersect_id = Algo.apply(:intersect, &op_intersect_terminals/2, id1, Algo.negate(id2))

  # --- THIS IS THE FIX ---
  # To prove that the intersection is empty, we must simplify it.
  # Since `spec1` (the type we are checking) is an LFP type, we should
  # simplify the intersection using LFP semantics. This will correctly
  # "unfold" the type and check for membership in the negated type.
  final_id = Algo.simplify(intersect_id, %{}, :lfp)
  final_id == Store.false_node_id()
end
```
By explicitly telling `simplify` to use `:lfp` semantics for the subtyping check, you are instructing it to use the `Store.true_node_id()` base case, which correctly handles the "productive" recursion of `list_of` types. This will resolve both remaining test failures.
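The cycle base-case logic above can be illustrated without any of the TDD machinery. `CycleDemo` below is purely hypothetical (it shares no code with `Tdd.Algo`): it decides whether every path from a node in a directed graph reaches a `true` terminal, and on detecting a cycle it returns the strategy's base case, mirroring the `:gfp`/`:lfp` branch in `do_simplify/4`.

```elixir
# Toy illustration of strategy-dependent cycle base cases (hypothetical
# module, not part of the Tdd codebase). `graph` maps each node to its
# successors; `true`/`false` are terminals.
defmodule CycleDemo do
  def eval(graph, node, strategy, seen \\ MapSet.new()) do
    cond do
      # Terminals decide immediately.
      node in [true, false] ->
        node

      # Cycle detected: return the strategy's base case, as in do_simplify/4.
      MapSet.member?(seen, node) ->
        case strategy do
          :gfp -> false
          :lfp -> true
        end

      # Otherwise recurse into all successors.
      true ->
        graph
        |> Map.fetch!(node)
        |> Enum.all?(&eval(graph, &1, strategy, MapSet.put(seen, node)))
    end
  end
end
```

On the cyclic graph `%{a: [:b], b: [:a]}`, `:gfp` rejects the looping path while `:lfp` accepts it, which is exactly why the subtyping check for productive `list_of` recursion needs `:lfp`.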

new.exs (new file, 3811 lines; diff suppressed because it is too large)

output.md (new file, 243 lines)

@@ -0,0 +1,243 @@
--- Running Tdd.TypeSpec.normalize/1 Tests ---
--- Section: Base & Simple Types ---
[PASS] Normalizing :any is idempotent
[PASS] Normalizing :none is idempotent
[PASS] Normalizing :atom is idempotent
[PASS] Normalizing a literal is idempotent
--- Section: Double Negation ---
[PASS] ¬(¬atom) simplifies to atom
[PASS] A single negation is preserved
[PASS] ¬(¬(¬atom)) simplifies to ¬atom
--- Section: Union Normalization ---
[PASS] Flattens nested unions
[PASS] Sorts members of a union
[PASS] Removes duplicates in a union
[PASS] Simplifies a union with :none (A | none -> A)
[PASS] Simplifies a union with :any (A | any -> any)
[PASS] An empty union simplifies to :none
[PASS] A union containing only :none simplifies to :none
[PASS] A union of a single element simplifies to the element itself
--- Section: Intersection Normalization ---
[PASS] Flattens nested intersections
[PASS] Sorts members of an intersection
[PASS] Removes duplicates in an intersection
[PASS] Simplifies an intersection with :any (A & any -> A)
[PASS] Simplifies an intersection with :none (A & none -> none)
[PASS] An empty intersection simplifies to :any
[PASS] An intersection of a single element simplifies to the element itself
--- Section: Recursive Normalization ---
[PASS] Recursively normalizes elements in a tuple
[PASS] Recursively normalizes head and tail in a cons
[PASS] Recursively normalizes element in list_of
[PASS] Recursively normalizes sub-spec in negation
--- Section: Complex Nested Cases ---
[PASS] Handles complex nested simplifications correctly
✅ All TypeSpec tests passed!
--- Running Tdd.Store Tests ---
--- Section: Initialization and Terminals ---
[PASS] true_node_id returns 1
[PASS] false_node_id returns 0
[PASS] get_node for ID 1 returns true_terminal
[PASS] get_node for ID 0 returns false_terminal
[PASS] get_node for unknown ID returns not_found
--- Section: Node Creation and Structural Sharing ---
[PASS] First created node gets ID 2
[PASS] get_node for ID 2 returns the correct tuple
[PASS] Second created node gets ID 3
[PASS] Attempting to create an existing node returns the same ID (Structural Sharing)
[PASS] Next new node gets the correct ID (4)
--- Section: Basic Reduction Rule ---
[PASS] A node with identical children reduces to the child's ID
--- Section: Caching ---
[PASS] Cache is initially empty for a key
[PASS] Cache returns the stored value after put
[PASS] Cache can be updated
✅ All Tdd.Store tests passed!
--- Running Tdd.Variable Tests ---
--- Section: Variable Structure ---
[PASS] v_is_atom returns correct tuple
[PASS] v_atom_eq returns correct tuple
[PASS] v_int_lt returns correct tuple
[PASS] v_tuple_size_eq returns correct tuple
[PASS] v_tuple_elem_pred nests a variable correctly
[PASS] v_list_is_empty returns correct tuple
[PASS] v_list_head_pred nests a variable correctly
--- Section: Global Ordering (Based on Elixir Term Comparison) ---
[PASS] Primary type var < Atom property var
[PASS] Integer :lt var < Integer :eq var
[PASS] Integer :eq var < Integer :gt var
[PASS] Integer :eq(5) var < Integer :eq(10) var
[PASS] Tuple elem(0) var < Tuple elem(1) var
[PASS] Tuple elem(0, atom) var < Tuple elem(0, int) var
Variable.v_list_is_empty(): {5, :b_is_empty, nil, nil}
[PASS] List :b_is_empty var < List :c_head var
[PASS] List :c_head var < List :tail var
✅ All Tdd.Variable tests passed!
--- Running Tdd.Algo & Tdd.Consistency.Engine Tests ---
--- Section: Algo.negate ---
[PASS] negate(true) is false
[PASS] negate(false) is true
[PASS] negate(negate(t_atom)) is t_atom
--- Section: Algo.apply (raw structural operations) ---
[PASS] Structure of 'atom | int' is correct
[PASS] :foo & :bar (raw) is not the false node
--- Section: Algo.simplify (with Consistency.Engine) ---
[PASS] Simplifying under contradictory assumptions (atom & int) results in false
[PASS] Simplifying 'integer' given 'value==:foo' results in false
[PASS] Simplifying 'atom & int' results in false
[PASS] Simplifying 'atom | int' given 'is_atom==true' results in true
[PASS] Simplifying 'atom | int' given 'is_atom==false' results in 'integer'
✅ All Tdd.Algo tests passed!
--- Running Tdd.Consistency.Engine Tests ---
--- Section: Basic & Implication Tests ---
[PASS] An empty assumption map is consistent
[PASS] A single valid assumption is consistent
[PASS] An implied contradiction is caught by expander
[PASS] An implied contradiction is caught by expander
[PASS] Implication creates a consistent set
--- Section: Primary Type Exclusivity ---
[PASS] Two primary types cannot both be true
[PASS] Two primary types implied to be true is a contradiction
[PASS] One primary type true and another false is consistent
--- Section: Atom Consistency ---
[PASS] An atom cannot equal two different values
[PASS] An atom can equal one value
--- Section: List Flat Consistency ---
[PASS] A list cannot be empty and have a head property
[PASS] A non-empty list can have a head property
[PASS] A non-empty list is implied by head property
--- Section: Integer Consistency ---
[PASS] int == 5 is consistent
[PASS] int == 5 AND int == 10 is a contradiction
[PASS] int < 10 AND int > 20 is a contradiction
[PASS] int > 5 AND int < 4 is a contradiction
[PASS] int > 5 AND int < 7 is consistent
[PASS] int == 5 AND int < 3 is a contradiction
[PASS] int == 5 AND int > 10 is a contradiction
[PASS] int == 5 AND int > 3 is consistent
✅ All Consistency.Engine tests passed!
--- Running Tdd.TypeReconstructor Tests ---
--- Section: Basic Flat Reconstructions ---
[PASS] is_atom=true -> atom
[PASS] is_atom=false -> ¬atom
[PASS] is_atom=true AND value==:foo -> :foo
[PASS] is_atom=true AND value!=:foo -> atom & ¬:foo
[PASS] is_integer=true AND int==5 -> 5
[PASS] is_list=true AND is_empty=true -> []
--- Section: Combined Flat Reconstructions ---
[PASS] int > 10 AND int < 20
--- Section: Recursive Reconstructions ---
[PASS] head is an atom
✅ All TypeReconstructor tests passed!
--- Running Compiler & Algo Integration Tests ---
--- Section: Basic Equivalences ---
[PASS] atom & any == atom
[PASS] atom | none == atom
[PASS] atom & int == none
[PASS] ¬(¬atom) == atom
[PASS] atom | atom == atom
--- Section: Basic Subtyping ---
[PASS] :foo <: atom
[PASS] atom <: :foo
[PASS] :foo <: integer
[PASS] int==5 <: integer
[PASS] none <: atom
[PASS] atom <: any
--- Section: Integer Range Logic ---
[PASS] range(7..8) <: range(5..10)
[PASS] range(5..10) <: range(7..8)
[PASS] range(5..10) <: range(15..20)
[PASS] range(5..10) & range(7..8) == range(7..8)
[PASS] range(5..10) & range(0..100) == range(5..10)
[PASS] range(5..10) | range(7..8) == range(5..10)
--- Section: Contradictions & Simplifications ---
[PASS] atom & integer
[PASS] :foo & :bar
[PASS] atom & (int==5)
[PASS] range(5..10) & range(15..20)
[PASS] integer & ¬integer
--- Section: Subtype Reduction Logic ---
[PASS] (:foo | :bar | atom) simplifies to atom
[PASS] (range(5..10) | integer) simplifies to integer
[PASS] (:foo & atom) simplifies to :foo
[PASS] (range(5..10) & integer) simplifies to range(5..10)
--- Section: Logical Laws ---
[PASS] De Morgan's (¬(A|B) == ¬A & ¬B) holds
[PASS] De Morgan's (¬(A&B) == ¬A | ¬B) holds
[PASS] Distributive Law (A & (B|C)) holds
✅ All Compiler & Algo Integration tests passed!
--- Running Tdd.Compiler Recursive Type Tests ---
--- Section: :cons ---
[PASS] :cons is a subtype of :list
[PASS] :cons is not a subtype of the empty list
[PASS] cons(integer, list) is a subtype of cons(any, any)
[PASS] cons(any, any) is not a subtype of cons(integer, list)
--- Section: :tuple ---
[PASS] {:tuple, [atom, int]} is a subtype of :tuple
[PASS] {:tuple, [atom, int]} is not a subtype of :list
[PASS] a tuple of size 2 is not a subtype of a tuple of size 3
[PASS] subtype check works element-wise (specific <: general)
[PASS] subtype check works element-wise (general </: specific)
[PASS] subtype check works element-wise (specific </: unrelated)
--- Section: :list_of ---
[PASS] list_of(E) is a subtype of list
[PASS] empty list is a subtype of any list_of(E)
[PASS] list_of(subtype) is a subtype of list_of(supertype)
[FAIL] list_of(supertype) is not a subtype of list_of(subtype)
Expected: false
Got: true
[PASS] a list with a wrong element type is not a subtype of list_of(E)
[FAIL] a list with a correct element type is a subtype of list_of(E)
Expected: true
Got: false
--- Section: Equivalence ---
[PASS] the recursive definition holds: list_of(E) == [] | cons(E, list_of(E))
[PASS] intersection of two list_of types works correctly

refactor.md (new file, 284 lines)

@@ -0,0 +1,284 @@
## Motivation for Architectural Evolution of the TDD-based Type System
**To:** Project Stakeholders & Technical Team
**From:** System Architect
**Date:** October 26, 2023
**Subject:** A Principled Path Forward for Stability, Extensibility, and Polymorphism
### 1. Introduction & Current Success
The existing Ternary Decision Diagram (TDD) implementation has been remarkably successful in proving the core value of a set-theoretic approach to types. By representing types as canonical graphs, we have achieved powerful and efficient operations for union, intersection, negation, and subtyping. The system's ability to automatically simplify complex type expressions like `(atom | tuple) & (tuple | :foo)` into a canonical form `(tuple | :foo)` is a testament to the power of this model.
However, as we scale the system to represent more complex, real-world data structures—specifically recursive types like `list(X)`—a foundational architectural issue has been identified. This issue currently compromises the system's correctness and severely limits its future potential.
This document motivates a series of architectural changes designed to address this issue, creating a robust foundation for future growth, including the long-term goal of supporting polymorphism.
### 2. The Core Challenge: The Unstable Variable Problem
The correctness of an Ordered TDD relies on a **fixed, global, total order of all predicate variables**. Our current implementation for recursive types like `list_of(X)` violates this principle.
- **The Flaw:** We currently construct predicate variables by embedding the integer TDD ID of the element type `X`. For instance, `list_of(atom)` might use the variable `{5, :all_elements, 4}` if `type_atom()` resolves to TDD ID `4`.
- **The Consequence:** TDD IDs are artifacts of construction order; they are not stable. If `type_integer()` is created before `type_atom()`, their IDs will swap, and the global variable order will change. This means that **semantically identical types can produce structurally different TDDs**, breaking the canonical representation that is the central promise of our system. This is not a theoretical concern; it is a critical bug that invalidates equivalence checks and subtyping operations under different run conditions.
This instability is a symptom of a deeper issue: we are conflating a type's **logical description** with its **compiled, set-theoretic representation**.
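To make the flaw concrete, here is a purely hypothetical interning sketch (none of these names exist in the codebase) showing how IDs fall out of construction order:

```elixir
# Hypothetical interning counter: IDs depend purely on construction order,
# with 0 and 1 reserved for the terminal nodes.
defmodule InternDemo do
  def intern(types) do
    types
    |> Enum.with_index(2)
    |> Map.new()
  end
end

run_a = InternDemo.intern([:atom, :integer])   # atom gets ID 2
run_b = InternDemo.intern([:integer, :atom])   # atom gets ID 3
# A variable such as {5, :all_elements, id} built from these two runs is
# structurally different even though it denotes the same type.
```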
### 3. The Proposed Solution: A Separation of Concerns
To solve this, we will introduce a principled separation between the "what" and the "how" of our types.
1. **`TypeSpec`: The Logical "What"**
We will introduce a new, stable, declarative data structure called a `TypeSpec`. A `TypeSpec` is a structural description of a type (e.g., `{:list_of, :atom}`, `{:union, [:integer, :atom]}`). This becomes the **new public API and canonical language** for defining types. It is human-readable, stable, and completely independent of the TDD implementation.
2. **TDD ID: The Compiled "How"**
The TDD graph and its ID will be treated as a **compiled artifact**. It is the powerful, efficient, set-theoretic representation of a concrete `TypeSpec`. The `Tdd` module will evolve into a low-level "backend" or "compiler engine."
This separation yields immediate and long-term benefits:
- **Guaranteed Correctness & Stability:** By using the stable `TypeSpec` within our TDD variables (e.g., `{5, :all_elements, {:base, :atom}}`), we restore a fixed global variable order. This guarantees that our TDDs are once again truly canonical, ensuring the reliability of all type operations.
- **Clean Architecture & API:** The public-facing API will be vastly improved, operating on clear, declarative `TypeSpec`s instead of opaque integer IDs. The internal complexity of the TDD machinery is encapsulated behind a new `Compiler` module, improving maintainability.
- **Unlocking Future Capabilities:** This architecture is not just a bug fix; it is an enabling refactor. A system that "compiles" a `TypeSpec` to a TDD is naturally extensible to polymorphism. The `TypeSpec` for `list(A)` can be represented as `{:list_of, {:type_var, :A}}`. A higher-level type inference engine can then operate on these specs, substituting variables and using our TDD compiler for concrete subtype checks. This provides a clear, principled path to our goal of supporting polymorphic functions and data structures.
### 4. Phased Implementation Plan
We will roll out these changes in three deliberate phases to manage complexity and deliver value incrementally.
- **Phase 1: Stabilize the Core.** Introduce `TypeSpec` and the `Compiler` to fix the unstable variable problem. This is the highest priority, as it ensures the correctness of our existing feature set.
- **Phase 2: Refine the Public API.** Build a new, high-level `TypeSystem` module that operates exclusively on `TypeSpec`s, providing a clean and intuitive interface for users of our library.
- **Phase 3: Introduce Polymorphism.** With the stable foundation in place, implement the unification and substitution logic required to handle `TypeSpec`s containing type variables, enabling the checking of polymorphic function schemas.
### 5. Conclusion
The current TDD system is a powerful proof of concept that is on the verge of becoming a robust, general-purpose tool. By addressing the foundational "unstable variable" problem through a principled separation of a type's specification from its implementation, we not only fix a critical bug but also pave a clear and logical path toward our most ambitious goals. This architectural evolution is a necessary investment in the long-term stability, correctness, and extensibility of our type system.
### Phase 1: Fix the Core Instability & Introduce `TypeSpec`
This phase is non-negotiable. It makes your current system sound and stable.
**1. Introduce the `TypeSpec` Data Structure:**
This will be the new "source of truth" for defining types. It's a stable, structural representation. This becomes the primary vocabulary for your public API.
```elixir
# In a new file, e.g., typespec.ex
defmodule Tdd.TypeSpec do
  @typedoc """
  A stable, structural representation of a type.
  This is the primary way users and the higher-level system will describe types.
  """
  @type t ::
          # Base Types
          :any
          | :none
          | :atom
          | :integer
          | :tuple  # any tuple
          | :list   # any list
          # Literal Types (canonical form of a value)
          | {:literal, term()}
          # Set-theoretic Combinators
          | {:union, [t()]}
          | {:intersect, [t()]}
          | {:negation, t()}
          # Parameterized Structural Types
          | {:tuple, [t()]}   # A tuple with specific element types
          | {:cons, t(), t()} # A non-empty list [H|T]
          | {:list_of, t()}   # A list of any length where elements are of a type
          # Integer Range (example of specific properties)
          | {:integer_range, integer() | :neg_inf, integer() | :pos_inf}
          # Polymorphic Variable (for Phase 3)
          | {:type_var, atom()}
end
```
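As a point of reference, here is a minimal sketch of two of the `normalize/1` rules exercised by the test suite above (double negation, and union flatten/sort/dedupe with `:none` removal). `NormalizeDemo` is illustrative only; the real pass also handles intersections, `:any` absorption, and recursive descent into structural types.

```elixir
defmodule NormalizeDemo do
  # Rule: double negation cancels.
  def normalize({:negation, {:negation, spec}}), do: normalize(spec)

  # Rule: unions are flattened, sorted, deduplicated; :none members are
  # dropped; empty and singleton unions collapse.
  def normalize({:union, specs}) do
    members =
      specs
      |> Enum.map(&normalize/1)
      |> Enum.flat_map(fn
        {:union, inner} -> inner
        other -> [other]
      end)
      |> Enum.reject(&(&1 == :none))
      |> Enum.sort()
      |> Enum.dedup()

    case members do
      [] -> :none
      [single] -> single
      many -> {:union, many}
    end
  end

  # Everything else passes through unchanged in this sketch.
  def normalize(other), do: other
end
```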
**2. Create a "Compiler": `Tdd.Compiler.spec_to_id/1`**
This is the new heart of your system. It's a memoized function that takes a `TypeSpec` and returns a canonical TDD ID. Your existing `Tdd` module becomes the "backend" for this compiler.
```elixir
# In a new file, e.g., compiler.ex
defmodule Tdd.Compiler do
  # This module manages the memoization cache from spec -> id.
  # For simplicity it lives in the process dictionary.
  @cache_key :spec_to_id_cache

  def spec_to_id(spec) do
    # 1. Normalize the spec (e.g., sort unions) to improve cache hits.
    normalized_spec = Tdd.TypeSpec.normalize(spec)

    # 2. Check the cache for `normalized_spec`. If found, return the ID.
    case Map.fetch(Process.get(@cache_key, %{}), normalized_spec) do
      {:ok, id} ->
        id

      :error ->
        # 3. If not found, compile it by calling private helpers.
        id =
          case normalized_spec do
            :atom -> Tdd.type_atom()
            :integer -> Tdd.type_integer()
            {:literal, val} -> build_literal_tdd(val)

            {:union, specs} ->
              ids = Enum.map(specs, &spec_to_id/1)
              Enum.reduce(ids, Tdd.type_none(), &Tdd.sum/2)

            # ... and so on ...
            {:list_of, element_spec} -> build_list_of_tdd(element_spec)
          end

        # 4. Cache and return the new ID. (Re-read the cache, since the
        # recursive calls above may have populated it in the meantime.)
        Process.put(@cache_key, Map.put(Process.get(@cache_key, %{}), normalized_spec, id))
        id
    end
  end

  # The key fix is here:
  defp build_list_of_tdd(element_spec) do
    # A) Compile the element spec to get its ID for semantic checks.
    element_id = spec_to_id(element_spec)

    # B) Create the TDD variable using the STABLE element_spec.
    #    This is the critical change.
    stable_var = Tdd.v_list_all_elements_are(element_spec)

    # C) Use the Tdd module to build the node, passing both the stable
    #    variable and the compiled ID for the consistency checker.
    #    (This requires a small change in the Tdd module.)
    Tdd.type_list_of_internal(stable_var, element_id)
  end
end
```
**3. Modify the `Tdd` Module to be a "Low-Level Backend":**
The public API of `Tdd` shrinks. Its main users are now `Tdd.Compiler` and `Tdd.TypeSystem` (Phase 2).
* **Change Variable Constructors:** `v_list_all_elements_are` now takes a `TypeSpec`.
```elixir
# In Tdd module
# The variable now contains the stable TypeSpec, not a volatile ID.
def v_list_all_elements_are(element_spec), do: {5, :all_elements, element_spec}
```
* **Update `check_assumptions_consistency`:** This function now needs access to the compiler to handle the new variable format.
```elixir
# In Tdd module, inside the consistency checker
# ... when it encounters a `v_list_all_elements_are` assumption...
{{5, :all_elements, element_spec}, true} ->
# It needs to know the TDD ID for this spec to do subtyping checks.
ambient_type_for_head = Tdd.Compiler.spec_to_id(element_spec)
# ... rest of the logic proceeds as before ...
```
This creates a circular dependency (`Compiler` -> `Tdd` -> `Compiler`), which is a sign of deep coupling. It is acceptable for now, but it suggests this consistency logic may eventually belong in the compiler itself.
* **Refactor Type Constructors:** Your old `type_list_of`, `type_tuple`, etc., are either removed or renamed to `_internal` and used by the new `Compiler`.
---
### Phase 2: Build the High-Level Type System API
This is the new public-facing interface. It operates on `TypeSpec`s and uses the `Compiler` and `Tdd` modules as its engine.
```elixir
# In a new file, e.g., type_system.ex
defmodule TypeSystem do
  alias Tdd.TypeSpec, as: Spec

  # --- Public Type Constructors (using TypeSpecs) ---
  def type_atom, do: :atom
  def type_integer, do: :integer
  def type_union(specs), do: {:union, specs}
  def type_list_of(spec), do: {:list_of, spec}
  # ... and so on ...

  # --- Public Operations ---

  @doc """
  Checks if spec1 is a subtype of spec2.
  """
  def is_subtype(spec1, spec2) do
    # 1. Compile both specs to their canonical TDD IDs.
    id1 = Tdd.Compiler.spec_to_id(spec1)
    id2 = Tdd.Compiler.spec_to_id(spec2)

    # 2. Use the fast, low-level TDD check.
    Tdd.is_subtype(id1, id2)
  end

  def intersect(spec1, spec2) do
    # This is a choice:
    # Option A (easy): just return {:intersect, [spec1, spec2]} and let the
    #   compiler handle it lazily when `spec_to_id` is called.
    # Option B (better): do some immediate simplifications here before
    #   compiling, e.g., intersect(:atom, :integer) -> :none
    {:intersect, [spec1, spec2]}
  end
end
```
**At the end of Phase 2, you have a sound, stable, and extensible set-theoretic type checker.** The "god function" problem still exists in `check_assumptions_consistency`, but the system is no longer fundamentally broken.
---
### Phase 3: Introduce Polymorphism
Now you leverage the `TypeSpec` architecture.
**1. Define Polymorphic Function Schemas:**
These are data structures, living outside the TDD world.
```elixir
# A function schema
@type fun_schema ::
        {:function,
         forall: [atom()],  # names of the quantified type variables
         args: [Spec.t()],  # argument types (may use quantified vars)
         return: Spec.t()}  # return type (may use quantified vars)

# Example: Enum.map/2
# forall A, B. (list(A), (A -> B)) -> list(B)
map_schema =
  {:function,
   forall: [:A, :B],
   args: [
     {:list_of, {:type_var, :A}},
     {:function, forall: [], args: [{:type_var, :A}], return: {:type_var, :B}}
   ],
   return: {:list_of, {:type_var, :B}}}
```
**2. Implement a Unification/Substitution Engine:**
This is a new, separate module that operates *only on `TypeSpec`s and schemas*.
```elixir
defmodule Type.Unify do
  # Takes a spec with variables and a concrete spec; returns a substitution
  # map or :error.
  # unify({:list_of, {:type_var, :A}}, {:list_of, :integer}) -> {:ok, %{A: :integer}}
  def unify(polymorphic_spec, concrete_spec) do
    # ... recursive pattern matching logic ...
  end

  # Applies a substitution map to a spec.
  # substitute({:list_of, {:type_var, :A}}, %{A: :integer}) -> {:list_of, :integer}
  def substitute(spec, substitution_map) do
    # ... recursive substitution logic ...
  end
end
```
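A minimal concrete version of such an engine might look as follows. `Type.UnifyDemo` is illustrative only: it handles just type variables, exact matches, and the `{:list_of, _}` constructor, and it threads an explicit substitution map through an arity-3 `unify` (an assumption about the eventual design, not the stub's arity-2 signature).

```elixir
defmodule Type.UnifyDemo do
  # Identical specs unify without changing the substitution.
  def unify(spec, spec, subst), do: {:ok, subst}

  # A type variable binds to the concrete spec, unless it is already
  # bound to something different.
  def unify({:type_var, v}, concrete, subst) do
    case Map.fetch(subst, v) do
      {:ok, ^concrete} -> {:ok, subst}
      {:ok, _other} -> :error
      :error -> {:ok, Map.put(subst, v, concrete)}
    end
  end

  # Matching constructors unify their components.
  def unify({:list_of, a}, {:list_of, b}, subst), do: unify(a, b, subst)

  # Anything else fails in this sketch.
  def unify(_, _, _), do: :error

  # Substitution replaces bound variables and descends into constructors.
  def substitute({:type_var, v}, subst), do: Map.fetch!(subst, v)
  def substitute({:list_of, s}, subst), do: {:list_of, substitute(s, subst)}
  def substitute(base, _subst), do: base
end
```

For example, `unify({:list_of, {:type_var, :A}}, {:list_of, :integer}, %{})` yields `{:ok, %{A: :integer}}`, and substituting that map back into `{:list_of, {:type_var, :A}}` recovers `{:list_of, :integer}`.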
**3. Build the Type Checker for Function Calls:**
This is the final piece that ties everything together.
```elixir
# In TypeSystem module
def check_function_call(function_schema, concrete_arg_specs) do
  # 1. Unify the schema's arg specs with the concrete arg specs.
  #    This produces a substitution map, e.g., %{A: :integer, B: :atom}.
  {:ok, substitution_map} = Unify.unify(function_schema.args, concrete_arg_specs)

  # 2. Check that the concrete args are valid subtypes.
  #    (This step is actually part of unification itself.)

  # 3. Infer the concrete return type by substituting into the schema's return spec.
  concrete_return_spec = Unify.substitute(function_schema.return, substitution_map)
  {:ok, concrete_return_spec}
end
```
This phased approach solves your immediate problem correctly, provides a clean public API, and builds the exact foundation needed to add a polymorphic layer on top without requiring another major architectural upheaval.

scratch.exs (new file, 308 lines)

@@ -0,0 +1,308 @@
defmodule Tdd.Debug do
@moduledoc """
Provides macros to wrap `def` and `defp` for simple function call/return tracing.
Logs arguments as a list and return values using `IO.inspect`.
"""
# --- Agent for Tracing State ---
@agent_name Tdd.Debug.StateAgent
def init_agent_if_needed do
case Process.whereis(@agent_name) do
nil -> Agent.start_link(fn -> MapSet.new() end, name: @agent_name)
_pid -> :ok
end
:ok
end
def add_traced_pid(pid) when is_pid(pid) do
init_agent_if_needed()
Agent.update(@agent_name, &MapSet.put(&1, pid))
end
def remove_traced_pid(pid) when is_pid(pid) do
case Process.whereis(@agent_name) do
nil -> :ok
agent_pid -> Agent.cast(agent_pid, fn state -> MapSet.delete(state, pid) end)
end
end
def is_pid_traced?(pid) when is_pid(pid) do
case Process.whereis(@agent_name) do
nil ->
false
agent_pid ->
try do
Agent.get(agent_pid, &MapSet.member?(&1, pid), :infinity)
rescue
ArgumentError -> false
catch
# Agent exits (e.g. the agent dying mid-call) are exit signals, not
# exceptions; `Exit` is not a rescuable module, so catch :exit instead.
:exit, _reason -> false
end
end
end
# --- Tracing Control Functions ---
@doc "Enables function call tracing for the current process."
def enable_tracing do
pid_to_trace = self()
add_traced_pid(pid_to_trace)
ref = Process.monitor(pid_to_trace)
Process.spawn(fn ->
receive do
{:DOWN, ^ref, :process, ^pid_to_trace, _reason} ->
remove_traced_pid(pid_to_trace)
after
3_600_000 -> # 1 hour safety timeout
remove_traced_pid(pid_to_trace)
end
end, [:monitor])
:ok
end
@doc "Disables function call tracing for the current process."
def disable_tracing do
remove_traced_pid(self())
:ok
end
@doc "Runs the given 0-arity function with tracing enabled, then disables it."
def run(fun) when is_function(fun, 0) do
enable_tracing()
try do
fun.()
after
disable_tracing()
end
end
# --- Process Dictionary for Call Depth ---
defp get_depth, do: Process.get(:tdd_debug_depth, 0)
def increment_depth do
new_depth = get_depth() + 1
Process.put(:tdd_debug_depth, new_depth)
new_depth
end
def decrement_depth do
new_depth = max(0, get_depth() - 1)
Process.put(:tdd_debug_depth, new_depth)
new_depth
end
# --- Core Macro Logic ---
defmacro __using__(_opts) do
quote do
import Kernel, except: [def: 1, def: 2, defp: 1, defp: 2]
import Tdd.Debug
require Tdd.Debug
end
end
@doc false
defmacro def(call, clauses \\ Keyword.new()) do
generate_traced_function(:def, call, clauses, __CALLER__)
end
@doc false
defmacro defp(call, clauses \\ Keyword.new()) do
generate_traced_function(:defp, call, clauses, __CALLER__)
end
defp is_simple_variable_ast?(ast_node) do
case ast_node do
{var_name, _meta, _context} when is_atom(var_name) ->
var_name != :_
_ -> false
end
end
defp generate_traced_function(type, call_ast, clauses, caller_env) do
require Macro # Good practice
{function_name_ast, meta_call, original_args_patterns_ast_nullable} = call_ast
original_args_patterns_ast_list = original_args_patterns_ast_nullable || []
original_body_ast =
(if Keyword.keyword?(clauses) do
Keyword.get(clauses, :do, clauses)
else
clauses
end) || quote(do: nil)
mapped_and_generated_vars_tuples =
Enum.map(Enum.with_index(original_args_patterns_ast_list), fn {original_pattern_ast, index} ->
# __td_arg_N__ is for logging, make it hygienic with `nil` context (or __MODULE__)
td_arg_var = Macro.var(String.to_atom("__td_arg_#{index}__"), nil)
{final_pattern_for_head, rhs_for_td_arg_assignment} =
case original_pattern_ast do
# 1. Ignored variable: `_`
# AST: {:_, meta, context_module_or_nil}
{:_, _, _} = underscore_ast ->
{underscore_ast, quote(do: :__td_ignored_argument__)}
# 2. Assignment pattern: `var = pattern` or `var = _`
# AST: {:=, meta, [lhs, rhs_of_assign]}
{:=, _meta_assign, [lhs_of_assign, _rhs_of_assign]} = assignment_pattern_ast ->
if is_simple_variable_ast?(lhs_of_assign) do
{assignment_pattern_ast, lhs_of_assign} # Head uses `var = pattern`, log `var`
else
# LHS is complex (e.g., `%{key: v} = pattern`), capture the whole value.
captured_val_var = Macro.unique_var(String.to_atom("tdc_assign_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(assignment_pattern_ast) end
{new_head_pattern, captured_val_var}
end
# 3. Default argument: `pattern_before_default \\ default_value`
# AST: {:\\, meta, [pattern_before_default, default_value_ast]}
{:\\, _meta_default, [pattern_before_default, _default_value_ast]} = default_arg_pattern_ast ->
cond do
# 3a. `var \\ default`
is_simple_variable_ast?(pattern_before_default) ->
{default_arg_pattern_ast, pattern_before_default}
# 3b. `(var = inner_pattern) \\ default`
match?({:=, _, [lhs_inner_assign, _]}, pattern_before_default) and
is_simple_variable_ast?(pattern_before_default |> elem(2) |> Enum.at(0)) ->
{:=, _, [lhs_inner_assign, _]}= pattern_before_default
# `lhs_inner_assign` is the var on the left of `=`
{default_arg_pattern_ast, lhs_inner_assign}
# 3c. `(complex_pattern) \\ default` or `(_ = inner_pattern) \\ default` etc.
true ->
captured_val_var = Macro.unique_var(String.to_atom("tdc_def_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(default_arg_pattern_ast) end
{new_head_pattern, captured_val_var}
end
# 4. Simple variable `var` (checked using our helper)
# or other complex patterns/literals not caught above.
ast_node ->
if is_simple_variable_ast?(ast_node) do
{ast_node, ast_node} # Head uses `var`, log `var`
else
# It's a complex pattern (e.g., `%{a:x}`, `[h|t]`) or a literal not assignable to.
captured_val_var = Macro.unique_var(String.to_atom("tdc_pat_#{index}"), Elixir)
new_head_pattern = quote do unquote(captured_val_var) = unquote(ast_node) end
{new_head_pattern, captured_val_var}
end
end
assignment_ast = quote do unquote(td_arg_var) = unquote(rhs_for_td_arg_assignment) end
{final_pattern_for_head, assignment_ast, td_arg_var}
end)
# Unzip the 3-tuples (Enum.unzip/1 only handles pairs).
{new_args_patterns_for_head_list, assignments_for_logging_vars_ast_list, generated_vars_to_log_asts} =
Enum.reduce(Enum.reverse(mapped_and_generated_vars_tuples), {[], [], []}, fn
{head, assign, var}, {heads, assigns, vars} ->
{[head | heads], [assign | assigns], [var | vars]}
end)
new_call_ast = {function_name_ast, meta_call, new_args_patterns_for_head_list}
traced_body_inner_ast =
quote do
unquote_splicing(assignments_for_logging_vars_ast_list)
if Tdd.Debug.is_pid_traced?(self()) do
current_print_depth = Tdd.Debug.increment_depth()
indent = String.duplicate(" ", current_print_depth - 1)
runtime_arg_values = [unquote_splicing(generated_vars_to_log_asts)]
actual_module_name_str = Atom.to_string(unquote(caller_env.module))
# The function_name_ast is resolved at macro expansion time.
# If it's `def foo(...)`, `unquote(function_name_ast)` becomes `:foo`.
# If `def unquote(name_var)(...)`, it resolves `name_var`.
resolved_fn_name = unquote(function_name_ast)
printable_function_name_str =
if is_atom(resolved_fn_name) do
Atom.to_string(resolved_fn_name)
else
Macro.to_string(resolved_fn_name) # For complex names / operators if AST passed
end
IO.puts(
"#{indent}CALL: #{actual_module_name_str}.#{printable_function_name_str}"
)
IO.puts(
"#{indent} ARGS: #{inspect(runtime_arg_values)}"
)
try do
result = unquote(original_body_ast)
_ = Tdd.Debug.decrement_depth()
IO.puts(
"#{indent}RETURN from #{actual_module_name_str}.#{printable_function_name_str}: #{inspect(result)}"
)
result
rescue
exception_class ->
error_instance = exception_class
stacktrace = __STACKTRACE__
_ = Tdd.Debug.decrement_depth()
IO.puts(
"#{indent}ERROR in #{actual_module_name_str}.#{printable_function_name_str}: #{inspect(error_instance)}"
)
reraise error_instance, stacktrace
end
else
unquote(original_body_ast)
end
end
final_definition_ast =
quote location: :keep do
Kernel.unquote(type)(
unquote(new_call_ast),
do: unquote(traced_body_inner_ast)
)
end
final_definition_ast
end
# --- TDD Graph Printing (Kept as it was, not directly related to call tracing simplification) ---
@doc "Prints a formatted representation of a TDD graph structure."
def print_tdd_graph(id, store_module \\ Tdd.Store) do
IO.puts("--- TDD Graph (ID: #{id}) ---")
do_print_tdd_node(id, 0, MapSet.new(), store_module)
IO.puts("------------------------")
end
defp do_print_tdd_node(id, indent_level, visited, store_module) do
prefix = String.duplicate(" ", indent_level)
if MapSet.member?(visited, id) do
IO.puts("#{prefix}ID #{id} -> [Seen, recursive link]")
:ok
else
new_visited = MapSet.put(visited, id)
case store_module.get_node(id) do # Assumes store_module.get_node/1 exists
{:ok, :true_terminal} -> IO.puts("#{prefix}ID #{id} -> TRUE")
{:ok, :false_terminal} -> IO.puts("#{prefix}ID #{id} -> FALSE")
{:ok, {var, y_id, n_id, dc_id}} ->
IO.puts("#{prefix}ID #{id}: IF #{inspect(var)}")
IO.puts("#{prefix} ├─ Yes (to ID #{y_id}):")
do_print_tdd_node(y_id, indent_level + 2, new_visited, store_module)
IO.puts("#{prefix} ├─ No (to ID #{n_id}):")
do_print_tdd_node(n_id, indent_level + 2, new_visited, store_module)
IO.puts("#{prefix} └─ DC (to ID #{dc_id}):")
do_print_tdd_node(dc_id, indent_level + 2, new_visited, store_module)
{:error, reason} ->
IO.puts("#{prefix}ID #{id}: ERROR - #{reason}")
end
end
end
end
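The visited-set guard in `do_print_tdd_node/4` is what keeps the printer from looping on recursive links. A minimal sketch of the same cycle-safe traversal, written here in Python over a plain `id -> node` dict (the dict layout and names are illustrative, not the Tdd.Store API):

```python
def print_graph(nodes, node_id, indent=0, visited=None):
    """Cycle-safe recursive printer over a node store (dict: id -> node).

    Terminals are the strings "true"/"false"; internal nodes are
    (var, yes_id, no_id, dc_id) tuples. Returns the printed lines.
    """
    visited = visited or frozenset()
    pad = "  " * indent
    if node_id in visited:
        return [f"{pad}ID {node_id} -> [Seen, recursive link]"]
    visited = visited | {node_id}  # fresh set, like MapSet.put/2
    node = nodes[node_id]
    if node in ("true", "false"):
        return [f"{pad}ID {node_id} -> {node.upper()}"]
    var, yes_id, no_id, dc_id = node
    lines = [f"{pad}ID {node_id}: IF {var}"]
    for label, child in [("Yes", yes_id), ("No", no_id), ("DC", dc_id)]:
        lines.append(f"{pad}  {label} (to ID {child}):")
        lines.extend(print_graph(nodes, child, indent + 2, visited))
    return lines

# A node whose DC branch points back at itself exercises the guard:
store = {1: ("is_atom", 2, 3, 1), 2: "true", 3: "false"}
out = print_graph(store, 1)
```

Threading a fresh union of `visited` into each recursive call mirrors the immutable `MapSet.put/2` in the Elixir version: a node is reported as seen only when it repeats along the current root-to-node path, not across sibling branches.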
defmodule Asd do
use Tdd.Debug
def kekistan(dupsko, {:something, _}) do
IO.inspect("inside kekistan")
dupsko + 1
end
end
Process.sleep(100)
Tdd.Debug.enable_tracing()
# Asd.kekistan(1, 2)
Asd.kekistan(1, {:something, 2})

test1.exs (1822 lines): diff suppressed because it is too large.

tests.exs (new file, 796 lines):
@@ -0,0 +1,796 @@
# --- Example Usage ---
Tdd.init_tdd_system()
test = fn name, expected, result ->
current_failures = Process.get(:test_failures, [])
if expected != result do
Process.put(:test_failures, [name | current_failures])
end
status = if expected == result, do: "PASSED", else: "FAILED"
IO.puts("#{name} (Expected: #{expected}) -> Result: #{result} - #{status}")
end
# Basic Types
tdd_foo = Tdd.type_atom_literal(:foo)
tdd_bar = Tdd.type_atom_literal(:bar)
tdd_baz = Tdd.type_atom_literal(:baz)
tdd_atom = Tdd.type_atom()
tdd_empty_tuple = Tdd.type_empty_tuple()
tdd_tuple = Tdd.type_tuple()
# Tuple of size 2, e.g. {any, any}
tdd_tuple_s2 = Tdd.type_tuple_sized_any(2)
tdd_any = Tdd.type_any()
tdd_none = Tdd.type_none()
test_all = fn ->
IO.puts("\n--- TDD for :foo ---")
Tdd.print_tdd(tdd_foo)
IO.puts("\n--- TDD for not :foo ---")
Tdd.print_tdd(Tdd.negate(tdd_foo))
IO.puts("\n--- TDD for atom ---")
Tdd.print_tdd(tdd_atom)
IO.puts("\n--- TDD for not atom ---")
# Expected: make_node(@v_is_atom, @false_node_id, @true_node_id, @true_node_id)
# This represents "anything that is not an atom". The DC branch becomes true because
# any value for which the "is_atom" test is irrelevant cannot be an atom, and so
# belongs to "not atom".
Tdd.print_tdd(Tdd.negate(tdd_atom))
IO.puts("\n--- TDD for :foo and :bar (should be none) ---")
tdd_foo_and_bar = Tdd.intersect(tdd_foo, tdd_bar)
# Expected ID 0: :false_terminal
Tdd.print_tdd(tdd_foo_and_bar)
IO.puts("\n--- TDD for :foo and atom (should be :foo) ---")
tdd_foo_and_atom = Tdd.intersect(tdd_foo, tdd_atom)
# Expected to be structurally identical to tdd_foo
Tdd.print_tdd(tdd_foo_and_atom)
IO.puts("\n--- Basic Subtyping Tests ---")
test.(":foo <: atom", true, Tdd.is_subtype(tdd_foo, tdd_atom))
test.("atom <: :foo", false, Tdd.is_subtype(tdd_atom, tdd_foo))
test.(":foo <: :bar", false, Tdd.is_subtype(tdd_foo, tdd_bar))
test.(":foo <: :foo", true, Tdd.is_subtype(tdd_foo, tdd_foo))
test.("{} <: tuple", true, Tdd.is_subtype(tdd_empty_tuple, tdd_tuple))
test.("tuple <: {}", false, Tdd.is_subtype(tdd_tuple, tdd_empty_tuple))
test.(":foo <: {}", false, Tdd.is_subtype(tdd_foo, tdd_empty_tuple))
test.("tuple_size_2 <: tuple", true, Tdd.is_subtype(tdd_tuple_s2, tdd_tuple))
test.("tuple <: tuple_size_2", false, Tdd.is_subtype(tdd_tuple, tdd_tuple_s2))
test.("tuple_size_2 <: {}", false, Tdd.is_subtype(tdd_tuple_s2, tdd_empty_tuple))
IO.puts("\n--- Any/None Subtyping Tests ---")
test.("any <: :foo", false, Tdd.is_subtype(tdd_any, tdd_foo))
test.(":foo <: any", true, Tdd.is_subtype(tdd_foo, tdd_any))
test.("none <: :foo", true, Tdd.is_subtype(tdd_none, tdd_foo))
test.(":foo <: none", false, Tdd.is_subtype(tdd_foo, tdd_none))
test.("none <: any", true, Tdd.is_subtype(tdd_none, tdd_any))
test.("any <: none", false, Tdd.is_subtype(tdd_any, tdd_none))
test.("any <: any", true, Tdd.is_subtype(tdd_any, tdd_any))
test.("none <: none", true, Tdd.is_subtype(tdd_none, tdd_none))
IO.puts("\n--- Union related Subtyping ---")
tdd_foo_or_bar = Tdd.sum(tdd_foo, tdd_bar)
tdd_foo_or_bar_or_baz = Tdd.sum(tdd_foo_or_bar, tdd_baz)
test.(":foo <: (:foo | :bar)", true, Tdd.is_subtype(tdd_foo, tdd_foo_or_bar))
test.(":baz <: (:foo | :bar)", false, Tdd.is_subtype(tdd_baz, tdd_foo_or_bar))
test.("(:foo | :bar) <: atom", true, Tdd.is_subtype(tdd_foo_or_bar, tdd_atom))
test.("atom <: (:foo | :bar)", false, Tdd.is_subtype(tdd_atom, tdd_foo_or_bar))
test.(
"(:foo | :bar) <: (:foo | :bar | :baz)",
true,
Tdd.is_subtype(tdd_foo_or_bar, tdd_foo_or_bar_or_baz)
)
test.(
"(:foo | :bar | :baz) <: (:foo | :bar)",
false,
Tdd.is_subtype(tdd_foo_or_bar_or_baz, tdd_foo_or_bar)
)
# Test against a non-member of the union
test.("(:foo | :bar) <: :baz", false, Tdd.is_subtype(tdd_foo_or_bar, tdd_baz))
IO.puts("\n--- Intersection related Subtyping ---")
# Should be equivalent to tdd_foo
tdd_atom_and_foo = Tdd.intersect(tdd_atom, tdd_foo)
# Should be tdd_none
tdd_atom_and_tuple = Tdd.intersect(tdd_atom, tdd_tuple)
test.("(atom & :foo) <: :foo", true, Tdd.is_subtype(tdd_atom_and_foo, tdd_foo))
test.(":foo <: (atom & :foo)", true, Tdd.is_subtype(tdd_foo, tdd_atom_and_foo))
test.("(atom & tuple) <: none", true, Tdd.is_subtype(tdd_atom_and_tuple, tdd_none))
test.("none <: (atom & tuple)", true, Tdd.is_subtype(tdd_none, tdd_atom_and_tuple))
test.("(atom & :foo) <: :bar", false, Tdd.is_subtype(tdd_atom_and_foo, tdd_bar))
# An intersection is a subtype of its components
test.("(atom & :foo) <: atom", true, Tdd.is_subtype(tdd_atom_and_foo, tdd_atom))
# (none <: atom)
test.("(atom & tuple) <: atom", true, Tdd.is_subtype(tdd_atom_and_tuple, tdd_atom))
# (none <: tuple)
test.("(atom & tuple) <: tuple", true, Tdd.is_subtype(tdd_atom_and_tuple, tdd_tuple))
IO.puts("\n--- Negation related Subtyping (Contrapositives) ---")
# Reminder: ¬A <: ¬B is equivalent to B <: A (contrapositive)
# Test 1: ¬atom <: ¬:foo (Equivalent to :foo <: atom, which is true)
test.("¬atom <: ¬:foo", true, Tdd.is_subtype(Tdd.negate(tdd_atom), Tdd.negate(tdd_foo)))
# Test 2: ¬:foo <: ¬atom (Equivalent to atom <: :foo, which is false)
test.("¬:foo <: ¬atom", false, Tdd.is_subtype(Tdd.negate(tdd_foo), Tdd.negate(tdd_atom)))
# Double negation
test.("¬(¬:foo) <: :foo", true, Tdd.is_subtype(Tdd.negate(Tdd.negate(tdd_foo)), tdd_foo))
test.(":foo <: ¬(¬:foo)", true, Tdd.is_subtype(tdd_foo, Tdd.negate(Tdd.negate(tdd_foo))))
# Disjoint types
test.("atom <: ¬tuple", true, Tdd.is_subtype(tdd_atom, Tdd.negate(tdd_tuple)))
test.("tuple <: ¬atom", true, Tdd.is_subtype(tdd_tuple, Tdd.negate(tdd_atom)))
test.(":foo <: ¬{}", true, Tdd.is_subtype(tdd_foo, Tdd.negate(tdd_empty_tuple)))
IO.puts("\n--- Mixed Types & Complex Subtyping ---")
tdd_atom_or_tuple = Tdd.sum(tdd_atom, tdd_tuple)
tdd_foo_or_empty_tuple = Tdd.sum(tdd_foo, tdd_empty_tuple)
test.(
"(:foo | {}) <: (atom | tuple)",
true,
Tdd.is_subtype(tdd_foo_or_empty_tuple, tdd_atom_or_tuple)
)
test.(
"(atom | tuple) <: (:foo | {})",
false,
Tdd.is_subtype(tdd_atom_or_tuple, tdd_foo_or_empty_tuple)
)
test.(":foo <: (atom | tuple)", true, Tdd.is_subtype(tdd_foo, tdd_atom_or_tuple))
test.("{} <: (atom | tuple)", true, Tdd.is_subtype(tdd_empty_tuple, tdd_atom_or_tuple))
# De Morgan's Law illustration (A | B = ¬(¬A & ¬B))
# (:foo | :bar) <: ¬(¬:foo & ¬:bar)
tdd_not_foo_and_not_bar = Tdd.intersect(Tdd.negate(tdd_foo), Tdd.negate(tdd_bar))
test.(
"(:foo | :bar) <: ¬(¬:foo & ¬:bar)",
true,
Tdd.is_subtype(tdd_foo_or_bar, Tdd.negate(tdd_not_foo_and_not_bar))
)
test.(
"¬(¬:foo & ¬:bar) <: (:foo | :bar)",
true,
Tdd.is_subtype(Tdd.negate(tdd_not_foo_and_not_bar), tdd_foo_or_bar)
)
# Type difference: atom - :foo (represented as atom & ¬:foo)
tdd_atom_minus_foo = Tdd.intersect(tdd_atom, Tdd.negate(tdd_foo))
test.("(atom - :foo) <: atom", true, Tdd.is_subtype(tdd_atom_minus_foo, tdd_atom))
test.("(atom - :foo) <: :foo", false, Tdd.is_subtype(tdd_atom_minus_foo, tdd_foo))
test.("(atom - :foo) <: :bar", false, Tdd.is_subtype(tdd_atom_minus_foo, tdd_bar))
# :bar is a member of (atom - :foo), so the reverse direction holds:
test.(":bar <: (atom - :foo)", true, Tdd.is_subtype(tdd_bar, tdd_atom_minus_foo))
# (atom - :foo) | :foo should be atom
tdd_recombined_atom = Tdd.sum(tdd_atom_minus_foo, tdd_foo)
test.("((atom - :foo) | :foo) <: atom", true, Tdd.is_subtype(tdd_recombined_atom, tdd_atom))
test.("atom <: ((atom - :foo) | :foo)", true, Tdd.is_subtype(tdd_atom, tdd_recombined_atom))
# (atom | {}) & (tuple | :foo) must be (:foo | {})
# Represents `atom() | {}`
tdd_atom_or_empty = Tdd.sum(tdd_atom, tdd_empty_tuple)
# Represents `tuple() | :foo`
tdd_tuple_or_foo = Tdd.sum(tdd_tuple, tdd_foo)
intersected_complex = Tdd.intersect(tdd_atom_or_empty, tdd_tuple_or_foo)
# Expected result for intersected_complex is tdd_foo_or_empty_tuple
test.(
"(atom | {}) & (tuple | :foo) <: (:foo | {})",
true,
Tdd.is_subtype(intersected_complex, tdd_foo_or_empty_tuple)
)
test.(
"(:foo | {}) <: (atom | {}) & (tuple | :foo)",
true,
Tdd.is_subtype(tdd_foo_or_empty_tuple, intersected_complex)
)
# {} | tuple_size_2 should be a subtype of tuple
tdd_empty_or_s2 = Tdd.sum(tdd_empty_tuple, tdd_tuple_s2)
test.("({} | tuple_size_2) <: tuple", true, Tdd.is_subtype(tdd_empty_or_s2, tdd_tuple))
test.(
"({} | tuple_size_2) <: ({} | tuple_size_2)",
true,
Tdd.is_subtype(tdd_empty_or_s2, tdd_empty_or_s2)
)
test.(
"({} | tuple_size_2) <: tuple_size_2",
false,
Tdd.is_subtype(tdd_empty_or_s2, tdd_tuple_s2)
)
# IO.puts("\n--- TDD structure for (atom - :foo) ---")
# Tdd.print_tdd(tdd_atom_minus_foo)
# IO.puts("\n--- TDD structure for ((atom - :foo) | :foo) which should be 'atom' ---")
# Tdd.print_tdd(tdd_recombined_atom)
# IO.puts("\n--- TDD structure for 'atom' for comparison ---")
# Tdd.print_tdd(tdd_atom)
IO.inspect(Process.get(:test_failures, []))
end
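The Boolean identities the suite above exercises (contrapositive, double negation, De Morgan, difference recombination) all follow from the set-theoretic reading of types, where subtyping is set inclusion. A quick independent check in Python over a toy finite universe; this is an illustration of the algebra, not part of the Tdd implementation:

```python
UNIVERSE = frozenset({"foo", "bar", "baz", "empty_tuple"})  # toy value universe

def neg(t):
    return UNIVERSE - t

def subtype(a, b):
    return a <= b  # subtyping is set inclusion in this model

foo = frozenset({"foo"})
bar = frozenset({"bar"})
atom = frozenset({"foo", "bar", "baz"})

# Contrapositive: (not atom) <: (not :foo)  iff  :foo <: atom
assert subtype(neg(atom), neg(foo)) == subtype(foo, atom) == True
# De Morgan: :foo | :bar == not(not :foo & not :bar)
assert (foo | bar) == neg(neg(foo) & neg(bar))
# Difference recombination: (atom - :foo) | :foo == atom
assert ((atom - foo) | foo) == atom
# :bar lies in the difference (atom - :foo)
assert subtype(bar, atom - foo)
```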
defmodule IntegerTests do
def run(test_fn) do
Process.put(:test_failures, [])
# Reset for each test group if needed, or once globally
Tdd.init_tdd_system()
# Integer types
tdd_int = Tdd.type_integer()
tdd_int_5 = Tdd.type_int_eq(5)
tdd_int_7 = Tdd.type_int_eq(7)
# x < 10
tdd_int_lt_10 = Tdd.type_int_lt(10)
# x > 3
tdd_int_gt_3 = Tdd.type_int_gt(3)
# x < 3
tdd_int_lt_3 = Tdd.type_int_lt(3)
# x > 10
tdd_int_gt_10 = Tdd.type_int_gt(10)
tdd_atom_foo = Tdd.type_atom_literal(:foo)
#
# IO.puts("\n--- Integer Type Structures ---")
# IO.puts("Integer:")
# Tdd.print_tdd(tdd_int)
# IO.puts("Int == 5:")
# Tdd.print_tdd(tdd_int_5)
# IO.puts("Int < 10:")
# Tdd.print_tdd(tdd_int_lt_10)
IO.puts("\n--- Integer Subtyping Tests ---")
test_fn.("int_5 <: integer", true, Tdd.is_subtype(tdd_int_5, tdd_int))
test_fn.("integer <: int_5", false, Tdd.is_subtype(tdd_int, tdd_int_5))
test_fn.("int_5 <: int_7", false, Tdd.is_subtype(tdd_int_5, tdd_int_7))
test_fn.("int_5 <: int_5", true, Tdd.is_subtype(tdd_int_5, tdd_int_5))
test_fn.("int_5 <: atom_foo", false, Tdd.is_subtype(tdd_int_5, tdd_atom_foo))
test_fn.("int_lt_10 <: integer", true, Tdd.is_subtype(tdd_int_lt_10, tdd_int))
test_fn.("integer <: int_lt_10", false, Tdd.is_subtype(tdd_int, tdd_int_lt_10))
# 5 < 10
test_fn.("int_5 <: int_lt_10", true, Tdd.is_subtype(tdd_int_5, tdd_int_lt_10))
test_fn.("int_lt_10 <: int_5", false, Tdd.is_subtype(tdd_int_lt_10, tdd_int_5))
test_fn.("int_gt_3 <: integer", true, Tdd.is_subtype(tdd_int_gt_3, tdd_int))
# 5 > 3
test_fn.("int_5 <: int_gt_3", true, Tdd.is_subtype(tdd_int_5, tdd_int_gt_3))
test_fn.("int_gt_3 <: int_5", false, Tdd.is_subtype(tdd_int_gt_3, tdd_int_5))
# x < 3 implies x < 10
test_fn.("int_lt_3 <: int_lt_10", true, Tdd.is_subtype(tdd_int_lt_3, tdd_int_lt_10))
# x > 10 implies x > 3
test_fn.("int_gt_10 <: int_gt_3", true, Tdd.is_subtype(tdd_int_gt_10, tdd_int_gt_3))
test_fn.("int_lt_10 <: int_lt_3", false, Tdd.is_subtype(tdd_int_lt_10, tdd_int_lt_3))
test_fn.("int_gt_3 <: int_gt_10", false, Tdd.is_subtype(tdd_int_gt_3, tdd_int_gt_10))
IO.puts("\n--- Integer Intersection Tests (should resolve to none for contradictions) ---")
intersect_5_7 = Tdd.intersect(tdd_int_5, tdd_int_7)
test_fn.("int_5 & int_7 == none", true, intersect_5_7 == Tdd.type_none())
# IO.puts("Structure of int_5 & int_7 (should be ID 0):")
# Tdd.print_tdd(intersect_5_7)
# x < 3 AND x > 10
intersect_lt3_gt10 = Tdd.intersect(tdd_int_lt_3, tdd_int_gt_10)
test_fn.("int_lt_3 & int_gt_10 == none", true, intersect_lt3_gt10 == Tdd.type_none())
# IO.puts("Structure of int_lt_3 & int_gt_10 (should be ID 0):")
# Tdd.print_tdd(intersect_lt3_gt10)
# x < 10 AND x > 3 (e.g. 4,5..9)
intersect_lt10_gt3 = Tdd.intersect(tdd_int_lt_10, tdd_int_gt_3)
test_fn.("int_lt_10 & int_gt_3 != none", true, intersect_lt10_gt3 != Tdd.type_none())
# IO.puts("Structure of int_lt_10 & int_gt_3 (should be non-empty):")
# Tdd.print_tdd(intersect_lt10_gt3)
# Test a value within this intersection
test_fn.(
"int_5 <: (int_lt_10 & int_gt_3)",
true,
Tdd.is_subtype(tdd_int_5, intersect_lt10_gt3)
)
# x == 5 AND x < 3
intersect_5_lt3 = Tdd.intersect(tdd_int_5, tdd_int_lt_3)
test_fn.("int_5 & int_lt_3 == none", true, intersect_5_lt3 == Tdd.type_none())
IO.puts("\n--- Integer Union Tests ---")
union_5_7 = Tdd.sum(tdd_int_5, tdd_int_7)
test_fn.("int_5 <: (int_5 | int_7)", true, Tdd.is_subtype(tdd_int_5, union_5_7))
test_fn.("int_7 <: (int_5 | int_7)", true, Tdd.is_subtype(tdd_int_7, union_5_7))
test_fn.("int_lt_3 <: (int_5 | int_7)", false, Tdd.is_subtype(tdd_int_lt_3, union_5_7))
# IO.puts("Structure of int_5 | int_7:")
# Tdd.print_tdd(union_5_7)
# (int < 3) | (int > 10)
union_disjoint_ranges = Tdd.sum(tdd_int_lt_3, tdd_int_gt_10)
test_fn.(
"int_eq(0) <: (int < 3 | int > 10)",
true,
Tdd.is_subtype(Tdd.type_int_eq(0), union_disjoint_ranges)
)
test_fn.(
"int_eq(5) <: (int < 3 | int > 10)",
false,
Tdd.is_subtype(Tdd.type_int_eq(5), union_disjoint_ranges)
)
test_fn.(
"int_eq(12) <: (int < 3 | int > 10)",
true,
Tdd.is_subtype(Tdd.type_int_eq(12), union_disjoint_ranges)
)
IO.inspect(Process.get(:test_failures, []))
end
end
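The expected results in the integer suite can be cross-checked by reading each integer type as a predicate and testing implication over a sample range. A small Python sketch (the sampled range is an illustrative stand-in for the TDD's symbolic constraint reasoning):

```python
# Each integer type becomes a predicate on integers.
def int_lt(n):
    return lambda x: x < n

def int_gt(n):
    return lambda x: x > n

def int_eq(n):
    return lambda x: x == n

SAMPLE = range(-50, 51)  # finite sample standing in for all integers

def empty(pred):
    return not any(pred(x) for x in SAMPLE)

def implies(p, q):  # subtyping: every p-value is a q-value
    return all(q(x) for x in SAMPLE if p(x))

def both(p, q):  # intersection of predicates
    return lambda x: p(x) and q(x)

assert empty(both(int_lt(3), int_gt(10)))       # x < 3 and x > 10: none
assert not empty(both(int_lt(10), int_gt(3)))   # 4..9 is inhabited
assert implies(int_lt(3), int_lt(10))           # x < 3 implies x < 10
assert implies(int_gt(10), int_gt(3))           # x > 10 implies x > 3
assert implies(int_eq(5), both(int_lt(10), int_gt(3)))
assert empty(both(int_eq(5), int_lt(3)))        # x == 5 and x < 3: none
```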
defmodule TupleTests do
import Tdd
def run(test_fn) do
Process.put(:test_failures, [])
# Re-init the system for a clean slate for these tests
Tdd.init_tdd_system()
IO.puts("\n--- Running TupleTests ---")
# --- Basic Types for convenience ---
t_atom = type_atom()
t_int = type_integer()
t_foo = type_atom_literal(:foo)
t_bar = type_atom_literal(:bar)
t_int_5 = type_int_eq(5)
t_int_6 = type_int_eq(6)
t_int_pos = type_int_gt(0)
t_any = type_any()
t_none = type_none()
# any tuple
t_tuple = type_tuple()
t_empty_tuple = type_empty_tuple()
# --- Specific Tuple Types ---
# {atom(), integer()}
tup_atom_int = type_tuple([t_atom, t_int])
# {:foo, 5}
tup_foo_5 = type_tuple([t_foo, t_int_5])
# {pos_integer(), atom()}
tup_pos_atom = type_tuple([t_int_pos, t_atom])
# {atom(), any}
tup_atom_any = type_tuple([t_atom, t_any])
# {any, integer()}
tup_any_int = type_tuple([t_any, t_int])
# a tuple of size 2, {any, any}
tup_s2_any = type_tuple_sized_any(2)
# a tuple of size 3, {any, any, any}
tup_s3_any = type_tuple_sized_any(3)
# {integer(), atom()}
tup_int_atom = type_tuple([t_int, t_atom])
# {{:foo}}
tup_nested_foo = type_tuple([type_tuple([t_foo])])
# {{atom()}}
tup_nested_atom = type_tuple([type_tuple([t_atom])])
# {any, none} -> this should resolve to none
tup_with_none = type_tuple([t_any, t_none])
IO.puts("\n--- Section: Basic Subtyping ---")
test_fn.("{:foo, 5} <: {atom, int}", true, is_subtype(tup_foo_5, tup_atom_int))
test_fn.("{atom, int} <: {:foo, 5}", false, is_subtype(tup_atom_int, tup_foo_5))
test_fn.("{:foo, 5} <: {pos_int, atom}", false, is_subtype(tup_foo_5, tup_pos_atom))
test_fn.("{pos_int, atom} <: {atom, int}", false, is_subtype(tup_pos_atom, tup_atom_int))
test_fn.("{atom, int} <: tuple()", true, is_subtype(tup_atom_int, t_tuple))
test_fn.("tuple() <: {atom, int}", false, is_subtype(t_tuple, tup_atom_int))
IO.puts("\n--- Section: Size-related Subtyping ---")
test_fn.("{atom, int} <: tuple_size_2_any", true, is_subtype(tup_atom_int, tup_s2_any))
test_fn.("tuple_size_2_any <: {atom, int}", false, is_subtype(tup_s2_any, tup_atom_int))
test_fn.("{atom, int} <: tuple_size_3_any", false, is_subtype(tup_atom_int, tup_s3_any))
test_fn.("tuple_size_2_any <: tuple_size_3_any", false, is_subtype(tup_s2_any, tup_s3_any))
test_fn.("{} <: tuple()", true, is_subtype(t_empty_tuple, t_tuple))
test_fn.("{} <: tuple_size_2_any", false, is_subtype(t_empty_tuple, tup_s2_any))
IO.puts("\n--- Section: Intersection ---")
# {atom, any} & {any, int} -> {atom, int}
intersect1 = intersect(tup_atom_any, tup_any_int)
test_fn.("({atom,any} & {any,int}) == {atom,int}", true, intersect1 == tup_atom_int)
# {atom, int} & {int, atom} -> none
intersect2 = intersect(tup_atom_int, tup_int_atom)
test_fn.("({atom,int} & {int,atom}) == none", true, intersect2 == t_none)
# tuple_size_2 & tuple_size_3 -> none
intersect3 = intersect(tup_s2_any, tup_s3_any)
test_fn.("(tuple_size_2 & tuple_size_3) == none", true, intersect3 == t_none)
# tuple() & {atom, int} -> {atom, int}
intersect4 = intersect(t_tuple, tup_atom_int)
test_fn.("(tuple() & {atom,int}) == {atom,int}", true, intersect4 == tup_atom_int)
IO.puts("\n--- Section: Union ---")
# {:foo, 5} | {pos_int, atom}
union1 = sum(tup_foo_5, tup_pos_atom)
test_fn.("{:foo, 5} <: ({:foo, 5} | {pos_int, atom})", true, is_subtype(tup_foo_5, union1))
test_fn.(
"{pos_int, atom} <: ({:foo, 5} | {pos_int, atom})",
true,
is_subtype(tup_pos_atom, union1)
)
test_fn.(
"{atom, int} <: ({:foo, 5} | {pos_int, atom})",
false,
is_subtype(tup_atom_int, union1)
)
# {atom, any} | {any, int} -> a complex type, let's check subtyping against it
union2 = sum(tup_atom_any, tup_any_int)
# {atom, int} is in both parts of the union.
test_fn.("{atom, int} <: ({atom,any} | {any,int})", true, is_subtype(tup_atom_int, union2))
# {:foo, :bar} is only in {atom, any}.
test_fn.(
"{:foo, :bar} <: ({atom,any} | {any,int})",
true,
is_subtype(type_tuple([t_foo, t_bar]), union2)
)
# {5, 6} is only in {any, int}.
test_fn.(
"{5, 6} <: ({atom,any} | {any,int})",
true,
is_subtype(type_tuple([t_int_5, t_int_6]), union2)
)
# {5, :foo} is in neither part of the union.
test_fn.(
"{5, :foo} <: ({atom,any} | {any,int})",
false,
is_subtype(type_tuple([t_int_5, t_foo]), union2)
)
IO.puts("\n--- Section: Negation and Type Difference ---")
# atom is disjoint from tuple, so atom <: ¬tuple
test_fn.("atom <: ¬tuple", true, is_subtype(t_atom, negate(t_tuple)))
# A specific tuple should not be a subtype of the negation of a more general tuple type it belongs to
test_fn.("{atom, int} <: ¬tuple()", false, is_subtype(tup_atom_int, negate(t_tuple)))
# {int, atom} is a subtype of ¬{atom, int} because their elements differ
test_fn.("{int, atom} <: ¬{atom, int}", true, is_subtype(tup_int_atom, negate(tup_atom_int)))
# tuple_size_3 is a subtype of ¬tuple_size_2 because their sizes differ
test_fn.("tuple_size_3 <: ¬tuple_size_2", true, is_subtype(tup_s3_any, negate(tup_s2_any)))
# Type difference: tuple_size_2 - {atom, any} -> should be {¬atom, any} for size 2 tuples.
diff1 = intersect(tup_s2_any, negate(tup_atom_any))
# {integer, integer} has a first element that is not an atom, so it should be in the difference.
tup_int_int = type_tuple([t_int, t_int])
test_fn.("{int, int} <: (tuple_size_2 - {atom, any})", true, is_subtype(tup_int_int, diff1))
test_fn.(
"{atom, int} <: (tuple_size_2 - {atom, any})",
false,
is_subtype(tup_atom_int, diff1)
)
IO.puts("\n--- Section: Nested Tuples ---")
test_fn.("{{:foo}} <: {{atom}}", true, is_subtype(tup_nested_foo, tup_nested_atom))
test_fn.("{{atom}} <: {{:foo}}", false, is_subtype(tup_nested_atom, tup_nested_foo))
# Intersection of disjoint nested types: {{:foo}} & {{:bar}}
intersect_nested = intersect(tup_nested_foo, type_tuple([type_tuple([t_bar])]))
test_fn.("{{:foo}} & {{:bar}} == none", true, intersect_nested == t_none)
# Union of nested types
union_nested = sum(tup_nested_foo, type_tuple([type_tuple([t_bar])]))
test_fn.("{{:foo}} <: ({{:foo}} | {{:bar}})", true, is_subtype(tup_nested_foo, union_nested))
test_fn.(
"{{:bar}} <: ({{:foo}} | {{:bar}})",
true,
is_subtype(type_tuple([type_tuple([t_bar])]), union_nested)
)
test_fn.(
"{{atom}} <: ({{:foo}} | {{:bar}})",
false,
is_subtype(tup_nested_atom, union_nested)
)
IO.puts("\n--- Section: Edge Cases (any, none) ---")
# A type `{any, none}` should not be possible. The value `none` cannot exist.
# The simplification logic should reduce this to `type_none`.
test_fn.("{any, none} == none", true, tup_with_none == t_none)
# Intersection with a tuple containing none should result in none
intersect_with_none_tuple = intersect(tup_atom_int, tup_with_none)
test_fn.("{atom,int} & {any,none} == none", true, intersect_with_none_tuple == t_none)
# Union with a tuple containing none should be a no-op
union_with_none_tuple = sum(tup_atom_int, tup_with_none)
test_fn.("{atom,int} | {any,none} == {atom,int}", true, union_with_none_tuple == tup_atom_int)
# --- Original tests from problem description for regression ---
IO.puts("\n--- Specific Tuple Subtyping Test (Original) ---")
test_fn.(
"{1, :foo} <: {int_gt_0, :foo | :bar}",
true,
is_subtype(
type_tuple([type_int_eq(1), type_atom_literal(:foo)]),
type_tuple([type_int_gt(0), sum(type_atom_literal(:foo), type_atom_literal(:bar))])
)
)
test_fn.(
"{0, :foo} <: {int_gt_0, :foo | :bar}",
false,
is_subtype(
type_tuple([type_int_eq(0), type_atom_literal(:foo)]),
type_tuple([type_int_gt(0), sum(type_atom_literal(:foo), type_atom_literal(:bar))])
)
)
test_fn.(
"{1, :kek} <: {int_gt_0, :foo | :bar}",
false,
is_subtype(
type_tuple([
type_int_eq(1),
type_atom_literal(:kek)
]),
type_tuple([type_int_gt(0), sum(type_atom_literal(:foo), type_atom_literal(:bar))])
)
)
IO.inspect(Process.get(:test_failures, []), label: "TupleTests failures")
end
end
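The tuple results above rest on componentwise semantics: a sized tuple type denotes the cross product of its element types, so subtyping, intersection, and the `{any, none}` collapse all fall out of plain set operations. A toy Python model over a tiny universe (illustrative only, not the TDD encoding):

```python
from itertools import product

atom = {"foo", "bar"}
integer = {1, 5, 6}
foo = {"foo"}
any_ = atom | integer  # "any" restricted to the toy universe

def tup(*elems):
    """A tuple type as the set of all concrete tuples it contains."""
    return set(product(*elems))

tup_atom_int = tup(atom, integer)
tup_foo_5 = tup(foo, {5})
tup_atom_any = tup(atom, any_)
tup_any_int = tup(any_, integer)

assert tup_foo_5 <= tup_atom_int                   # {:foo, 5} <: {atom, int}
assert not tup_atom_int <= tup_foo_5
assert tup_atom_any & tup_any_int == tup_atom_int  # intersection is componentwise
assert tup(any_, set()) == set()                   # {any, none} collapses to none
```

The last assertion is the `{any, none} == none` edge case: a product with an empty factor is empty, which is exactly why the simplification logic must reduce such tuples to `type_none`.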
defmodule ListTests do
import Tdd
def run(test_fn) do
Process.put(:test_failures, [])
Tdd.init_tdd_system()
IO.puts("\n--- Running ListTests ---")
# --- Basic Types ---
t_atom = type_atom()
t_int = type_integer()
t_foo = type_atom_literal(:foo)
t_bar = type_atom_literal(:bar)
t_any = type_any()
t_none = type_none()
# --- List Types ---
t_list = type_list()
t_empty_list = type_empty_list()
# [atom | list]
t_cons_atom_list = type_cons(t_atom, t_list)
# [:foo | []]
t_cons_foo_empty = type_cons(t_foo, t_empty_list)
# [atom | []]
t_cons_atom_empty = type_cons(t_atom, t_empty_list)
# [any | []]
t_cons_any_empty = type_cons(t_any, t_empty_list)
# [integer | list]
t_cons_int_list = type_cons(t_int, t_list)
IO.puts("\n--- Section: Basic List Subtyping ---")
test_fn.("[] <: list", true, is_subtype(t_empty_list, t_list))
test_fn.("list <: []", false, is_subtype(t_list, t_empty_list))
test_fn.("[atom|list] <: list", true, is_subtype(t_cons_atom_list, t_list))
test_fn.("list <: [atom|list]", false, is_subtype(t_list, t_cons_atom_list))
test_fn.("[] <: [atom|list]", false, is_subtype(t_empty_list, t_cons_atom_list))
test_fn.("[atom|list] <: []", false, is_subtype(t_cons_atom_list, t_empty_list))
test_fn.("list <: atom", false, is_subtype(t_list, t_atom))
test_fn.("atom <: list", false, is_subtype(t_atom, t_list))
IO.puts("\n--- Section: Cons Subtyping (Covariance) ---")
# Head is a subtype
test_fn.("[:foo|[]] <: [atom|[]]", true, is_subtype(t_cons_foo_empty, t_cons_atom_empty))
test_fn.("[atom|[]] <: [:foo|[]]", false, is_subtype(t_cons_atom_empty, t_cons_foo_empty))
# Tail is a subtype
test_fn.("[atom|[]] <: [atom|list]", true, is_subtype(t_cons_atom_empty, t_cons_atom_list))
test_fn.("[atom|list] <: [atom|[]]", false, is_subtype(t_cons_atom_list, t_cons_atom_empty))
# Both are subtypes
test_fn.("[:foo|[]] <: [atom|list]", true, is_subtype(t_cons_foo_empty, t_cons_atom_list))
# Neither is a subtype
test_fn.("[atom|list] <: [:foo|[]]", false, is_subtype(t_cons_atom_list, t_cons_foo_empty))
# A list of length 1 is a subtype of a list of any element of length 1
test_fn.("[atom|[]] <: [any|[]]", true, is_subtype(t_cons_atom_empty, t_cons_any_empty))
IO.puts("\n--- Section: List Intersection ---")
# [atom|list] & [integer|list] -> should be none due to head conflict
intersect1 = intersect(t_cons_atom_list, t_cons_int_list)
test_fn.("[atom|list] & [integer|list] == none", true, intersect1 == t_none)
# [any|[]] & [atom|list] -> should be [atom|[]]
intersect2 = intersect(t_cons_any_empty, t_cons_atom_list)
test_fn.("([any|[]] & [atom|list]) == [atom|[]]", true, intersect2 == t_cons_atom_empty)
# [] & [atom|list] -> should be none because one is empty and one is not
intersect3 = intersect(t_empty_list, t_cons_atom_list)
test_fn.("[] & [atom|list] == none", true, intersect3 == t_none)
IO.puts("\n--- Section: List Union ---")
# [] | [atom|[]]
union1 = sum(t_empty_list, t_cons_atom_empty)
test_fn.("[] <: ([] | [atom|[]])", true, is_subtype(t_empty_list, union1))
test_fn.("[atom|[]] <: ([] | [atom|[]])", true, is_subtype(t_cons_atom_empty, union1))
test_fn.(
"[integer|[]] <: ([] | [atom|[]])",
false,
is_subtype(type_cons(t_int, t_empty_list), union1)
)
# [:foo|[]] | [:bar|[]]
union2 = sum(t_cons_foo_empty, type_cons(t_bar, t_empty_list))
# This union is a subtype of [atom|[]]
test_fn.("([:foo|[]] | [:bar|[]]) <: [atom|[]]", true, is_subtype(union2, t_cons_atom_empty))
test_fn.("[atom|[]] <: ([:foo|[]] | [:bar|[]])", false, is_subtype(t_cons_atom_empty, union2))
IO.puts("\n--- Section: List Negation ---")
# list is a subtype of not(atom)
test_fn.("list <: ¬atom", true, is_subtype(t_list, negate(t_atom)))
# A non-empty list is a subtype of not an empty list
test_fn.("[atom|list] <: ¬[]", true, is_subtype(t_cons_atom_list, negate(t_empty_list)))
# [integer|list] is a subtype of not [atom|list]
test_fn.(
"[integer|list] <: ¬[atom|list]",
true,
is_subtype(t_cons_int_list, negate(t_cons_atom_list))
)
IO.inspect(Process.get(:test_failures, []), label: "ListTests failures")
end
end
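The cons covariance results above follow from reading `[H | T]` as the set of all lists built from a head in `H` and a tail in `T`. A minimal Python model over finite lists (illustrative, not the TDD encoding):

```python
foo = {"foo"}
atom = {"foo", "bar"}
empty_list = {()}  # the empty-list type as a singleton set

def cons(head, tail):
    """[H | T] as the set of all (h, *t) sequences, h in H, t in T."""
    return {(h,) + t for h in head for t in tail}

cons_foo_empty = cons(foo, empty_list)    # [:foo | []]
cons_atom_empty = cons(atom, empty_list)  # [atom | []]

assert cons_foo_empty <= cons_atom_empty        # covariant in the head
assert not cons_atom_empty <= cons_foo_empty
assert empty_list & cons_atom_empty == set()    # [] & [atom | []] == none
assert cons(foo, empty_list) <= cons(atom, empty_list | cons(atom, empty_list))
```

Covariance in both positions means `cons(H1, T1) <= cons(H2, T2)` exactly when `H1 <= H2` and `T1 <= T2`, matching the "head is a subtype" and "tail is a subtype" cases tested above.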
defmodule ListOfTests do
import Tdd
def run(test_fn) do
Process.put(:test_failures, [])
Tdd.init_tdd_system()
IO.puts("\n--- Running ListOfTests ---")
# --- Basic Types ---
t_atom = type_atom()
t_int = type_integer()
t_pos_int = type_int_gt(0)
t_int_5 = type_int_eq(5)
# --- list(X) Types ---
t_list_of_int = type_list_of(t_int)
t_list_of_pos_int = type_list_of(t_pos_int)
t_list_of_atom = type_list_of(t_atom)
# --- Specific List Types ---
t_list = type_list()
t_empty_list = type_empty_list()
# [5]
t_list_one_int = type_cons(t_int_5, t_empty_list)
# [:foo]
t_list_one_atom = type_cons(type_atom_literal(:foo), t_empty_list)
# [5, :foo]
t_list_int_and_atom = type_cons(t_int_5, type_cons(type_atom_literal(:foo), t_empty_list))
IO.puts("\n--- Section: Basic list(X) Subtyping ---")
test_fn.("list(integer) <: list()", true, is_subtype(t_list_of_int, t_list))
test_fn.("list() <: list(integer)", false, is_subtype(t_list, t_list_of_int))
test_fn.("[] <: list(integer)", true, is_subtype(t_empty_list, t_list_of_int))
test_fn.("[5] <: list(integer)", true, is_subtype(t_list_one_int, t_list_of_int))
test_fn.("[:foo] <: list(integer)", false, is_subtype(t_list_one_atom, t_list_of_int))
test_fn.("[5, :foo] <: list(integer)", false, is_subtype(t_list_int_and_atom, t_list_of_int))
test_fn.(
"[5, :foo] <: list(any)",
true,
is_subtype(t_list_int_and_atom, type_list_of(type_any()))
)
IO.puts("\n--- Section: Covariance of list(X) ---")
test_fn.(
"list(pos_integer) <: list(integer)",
true,
is_subtype(t_list_of_pos_int, t_list_of_int)
)
test_fn.(
"list(integer) <: list(pos_integer)",
false,
is_subtype(t_list_of_int, t_list_of_pos_int)
)
IO.puts("\n--- Section: Intersection of list(X) ---")
# list(integer) & list(pos_integer) should be list(pos_integer)
intersect1 = intersect(t_list_of_int, t_list_of_pos_int)
test_fn.(
"(list(int) & list(pos_int)) == list(pos_int)",
true,
intersect1 == t_list_of_pos_int
)
# list(integer) & list(atom) should be just [] (empty list is the only common member)
# The system simplifies this intersection to a type that only accepts the empty list.
intersect2 = intersect(t_list_of_int, t_list_of_atom)
test_fn.("[] <: (list(int) & list(atom))", true, is_subtype(t_empty_list, intersect2))
test_fn.("[5] <: (list(int) & list(atom))", false, is_subtype(t_list_one_int, intersect2))
test_fn.("[:foo] <: (list(int) & list(atom))", false, is_subtype(t_list_one_atom, intersect2))
# It should be equivalent to `type_empty_list`
test_fn.("(list(int) & list(atom)) == []", true, intersect2 == t_empty_list)
IO.puts("\n--- Section: Intersection of list(X) with cons ---")
# list(integer) & [:foo | []] -> should be none
intersect3 = intersect(t_list_of_int, t_list_one_atom)
test_fn.("list(integer) & [:foo] == none", true, intersect3 == type_none())
# list(integer) & [5 | []] -> should be [5 | []]
intersect4 = intersect(t_list_of_int, t_list_one_int)
test_fn.("list(integer) & [5] == [5]", true, intersect4 == t_list_one_int)
# list(integer) & [5, :foo] -> should be none
intersect5 = intersect(t_list_of_int, t_list_int_and_atom)
test_fn.("list(integer) & [5, :foo] == none", true, intersect5 == type_none())
IO.inspect(Process.get(:test_failures, []), label: "ListOfTests failures")
end
end
defmodule AdhocTest do
import Tdd
def run(test_fn) do
# --- Types under test ---
t_int = type_integer()
t_int_5 = type_int_eq(5)
t_list_of_int = type_list_of(t_int)
t_empty_list = type_empty_list()
# [5]
t_list_one_int = type_cons(t_int_5, t_empty_list)
intersect4 = intersect(t_list_of_int, t_list_one_int)
# Equivalence requires subtyping in both directions, not merely a == b
# (which would also pass if both checks were false).
a = is_subtype(intersect4, t_list_one_int)
b = is_subtype(t_list_one_int, intersect4)
test_fn.("list(integer) & [5] == [5]", true, a and b)
end
end
test_all.()
IntegerTests.run(test)
TupleTests.run(test)
ListTests.run(test)
ListOfTests.run(test)
AdhocTest.run(test)