CRAN Package Check Results for Package GitAI

Last updated on 2025-05-18 20:49:44 CEST.

Flavor                              Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang   0.1.0        6.15   31.16   37.31  OK
r-devel-linux-x86_64-debian-gcc     0.1.0        3.63   24.00   27.63  ERROR
r-devel-linux-x86_64-fedora-clang   0.1.0                       57.78  ERROR
r-devel-linux-x86_64-fedora-gcc     0.1.0                       72.01  OK
r-devel-windows-x86_64              0.1.0        8.00   50.00   58.00  OK
r-patched-linux-x86_64              0.1.0        5.53   28.44   33.97  OK
r-release-linux-x86_64              0.1.0        5.28   28.24   33.52  OK
r-release-macos-arm64               0.1.0                       26.00  OK
r-release-macos-x86_64              0.1.0                       70.00  OK
r-release-windows-x86_64            0.1.0        8.00   70.00   78.00  OK
r-oldrel-macos-arm64                0.1.0                       33.00  OK
r-oldrel-macos-x86_64               0.1.0                       54.00  OK
r-oldrel-windows-x86_64             0.1.0        9.00   56.00   65.00  OK

Check Details

Version: 0.1.0
Check: tests
Result: ERROR
    Running ‘testthat.R’ [3s/4s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
    > # * https://testthat.r-lib.org/articles/special-files.html
    >
    > library(testthat)
    > library(rlang)

    Attaching package: 'rlang'

    The following objects are masked from 'package:testthat':

        is_false, is_null, is_true

    > library(GitAI)
    >
    > test_check("GitAI")
    [ FAIL 4 | WARN 5 | SKIP 7 | PASS 35 ]

    ══ Skipped tests (7) ═══════════════════════════════════════════════════════════
    • OPENAI_API_KEY env var is not configured (5): 'test-process_content.R:3:3',
      'test-process_content.R:17:3', 'test-process_repos.R:5:3',
      'test-set_llm.R:4:3', 'test-set_llm.R:15:3'
    • On CRAN (2): 'test-add_files.R:22:3', 'test-set_repos.R:2:3'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-set_llm.R:36:3'): setting LLM with default provider ───────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(my_project) at test-set_llm.R:36:3
     2.   ├─rlang::exec(provider_method, !!!provider_args)
     3.   └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     4.     └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5.       ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6.       └─ellmer (local) `<S7_class>`(...)
     7.         ├─S7::new_object(...)
     8.         └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:60:3'): setting arguments for selected provider ─────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(my_project, provider = "openai", model = "model_mocked") at test-set_llm.R:60:3
     2.   ├─rlang::exec(provider_method, !!!provider_args)
     3.   └─GitAI (local) `<fn>`(model = "model_mocked", seed = NULL, echo = "none")
     4.     └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5.       ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6.       └─ellmer (local) `<S7_class>`(...)
     7.         ├─S7::new_object(...)
     8.         └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:89:3'): setting LLM without system prompt ────────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(initialize_project("gitai_test_project")) at test-set_llm.R:89:3
     2.   ├─rlang::exec(provider_method, !!!provider_args)
     3.   └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     4.     └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5.       ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6.       └─ellmer (local) `<S7_class>`(...)
     7.         ├─S7::new_object(...)
     8.         └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:105:3'): setting system prompt ───────────────────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. ├─GitAI::set_prompt(set_llm(my_project), system_prompt = "You always return only 'Hi there!'") at test-set_llm.R:105:3
     2. └─GitAI::set_llm(my_project)
     3.   ├─rlang::exec(provider_method, !!!provider_args)
     4.   └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     5.     └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     6.       ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     7.       └─ellmer (local) `<S7_class>`(...)
     8.         ├─S7::new_object(...)
     9.         └─ellmer::Provider(...)

    [ FAIL 4 | WARN 5 | SKIP 7 | PASS 35 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.1.0
Check: tests
Result: ERROR
    Running ‘testthat.R’ [8s/17s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
    > # * https://testthat.r-lib.org/articles/special-files.html
    >
    > library(testthat)
    > library(rlang)

    Attaching package: 'rlang'

    The following objects are masked from 'package:testthat':

        is_false, is_null, is_true

    > library(GitAI)
    >
    > test_check("GitAI")
    [ FAIL 4 | WARN 5 | SKIP 7 | PASS 35 ]

    ══ Skipped tests (7) ═══════════════════════════════════════════════════════════
    • OPENAI_API_KEY env var is not configured (5): 'test-process_content.R:3:3',
      'test-process_content.R:17:3', 'test-process_repos.R:5:3',
      'test-set_llm.R:4:3', 'test-set_llm.R:15:3'
    • On CRAN (2): 'test-add_files.R:22:3', 'test-set_repos.R:2:3'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-set_llm.R:36:3'): setting LLM with default provider ───────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(my_project) at test-set_llm.R:36:3
     2.   ├─rlang::exec(provider_method, !!!provider_args)
     3.   └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     4.     └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5.       ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6.       └─ellmer (local) `<S7_class>`(...)
     7.         ├─S7::new_object(...)
     8.         └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:60:3'): setting arguments for selected provider ─────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(my_project, provider = "openai", model = "model_mocked") at test-set_llm.R:60:3
     2.   ├─rlang::exec(provider_method, !!!provider_args)
     3.   └─GitAI (local) `<fn>`(model = "model_mocked", seed = NULL, echo = "none")
     4.     └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5.       ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6.       └─ellmer (local) `<S7_class>`(...)
     7.         ├─S7::new_object(...)
     8.         └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:89:3'): setting LLM without system prompt ────────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(initialize_project("gitai_test_project")) at test-set_llm.R:89:3
     2.   ├─rlang::exec(provider_method, !!!provider_args)
     3.   └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     4.     └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5.       ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6.       └─ellmer (local) `<S7_class>`(...)
     7.         ├─S7::new_object(...)
     8.         └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:105:3'): setting system prompt ───────────────────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. ├─GitAI::set_prompt(set_llm(my_project), system_prompt = "You always return only 'Hi there!'") at test-set_llm.R:105:3
     2. └─GitAI::set_llm(my_project)
     3.   ├─rlang::exec(provider_method, !!!provider_args)
     4.   └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     5.     └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     6.       ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     7.       └─ellmer (local) `<S7_class>`(...)
     8.         ├─S7::new_object(...)
     9.         └─ellmer::Provider(...)

    [ FAIL 4 | WARN 5 | SKIP 7 | PASS 35 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
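
Note on the failure mode: in both failing flavors the backtrace shows the test mock splicing a stored argument list into the ellmer provider constructor via rlang::exec(provider_class, !!!provider_args), which then stops with "Required". A plausible reading, stated here as an assumption rather than a confirmed diagnosis, is that a newer ellmer release made a constructor argument mandatory, so an argument list assembled in tests/testthat/setup.R before that change no longer satisfies it. The sketch below reproduces that failure mode with a plain R function standing in for the constructor; make_provider and its `name` argument are hypothetical names used only for illustration.

    library(rlang)

    # Stand-in for the provider constructor the mock calls through
    # rlang::exec(provider_class, !!!provider_args). The `name` argument is
    # hypothetical: it models an upstream constructor gaining a new argument
    # with no default.
    make_provider <- function(name, model, base_url = NULL) {
      list(name = name, model = model, base_url = base_url)
    }

    # An argument list assembled before the new argument existed no longer
    # satisfies the constructor once it is spliced in with exec().
    provider_args <- list(model = "gpt-4o-mini")
    try(exec(make_provider, !!!provider_args))   # fails: `name` is missing, with no default

    # Supplying the newly required field restores the call.
    exec(make_provider, !!!c(provider_args, list(name = "openai")))

Whether this matches the actual cause would need to be confirmed against the ellmer changelog and the GitAI test setup; if it does, updating the mocked argument list in tests/testthat/setup.R to match the installed ellmer version would be the corresponding fix.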