# BA Agent Post-Training Experiment
This directory is a standalone experiment workspace copied out of the main training path.
Training objectives:

- Keep `data_insight` isolated from long-chain planning and report writing.
- Reuse one shared base model with role-specific LoRA adapters.
- Optimize evidence-grounded analysis and conclusion reliability before attempting long-report end-to-end RL.
Recommended order:

1. Train `data_insight` with SFT.
2. Train `writer` with SFT.
3. Run `writer` DPO and reward modeling on BA preference data.
4. Optionally run PPO with the reward adapter on prompt-only tasks.
Supported SFT schema:

```json
{"system": "optional role prompt", "prompt": "question or analysis context", "response": "target answer"}
```
Supported automatically:

- `instruction` + `input` + `output`
- `question` + `answer`
- `messages` / `conversations` where the final assistant turn is the target
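The automatic mappings above can be sketched as a single normalization function. This is a hypothetical illustration of the idea, not the shipped code; the real logic lives in `data_adapter.py`, and any field-handling details beyond the key names listed here are assumptions.

```python
# Hypothetical sketch of SFT schema normalization. The authoritative
# implementation is data_adapter.py; fallback details here are assumptions.

def normalize_sft_record(record: dict) -> dict:
    """Map a raw record into the {system, prompt, response} SFT schema."""
    if "prompt" in record and "response" in record:
        # Already in the target schema; system stays optional.
        return {"system": record.get("system", ""),
                "prompt": record["prompt"], "response": record["response"]}
    if "instruction" in record and "output" in record:
        # Fold the optional `input` field into the prompt, Alpaca-style.
        prompt = record["instruction"]
        if record.get("input"):
            prompt += "\n\n" + record["input"]
        return {"system": record.get("system", ""),
                "prompt": prompt, "response": record["output"]}
    if "question" in record and "answer" in record:
        return {"system": record.get("system", ""),
                "prompt": record["question"], "response": record["answer"]}
    if "messages" in record or "conversations" in record:
        turns = record.get("messages") or record.get("conversations")
        # Final assistant turn is the target; earlier turns become context.
        *context, target = turns
        prompt = "\n".join(t.get("content", t.get("value", "")) for t in context)
        return {"system": record.get("system", ""), "prompt": prompt,
                "response": target.get("content", target.get("value", ""))}
    raise ValueError(f"Unrecognized SFT record keys: {sorted(record)}")
```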
Supported DPO / reward schema:

```json
{"system": "optional role prompt", "prompt": "same prompt shown to both candidates", "chosen": "preferred answer", "rejected": "dispreferred answer"}
```
Supported automatically:

- `question` + `response_chosen` + `response_rejected`
- Anthropic/hh-rlhf-style `chosen` / `rejected`
- PKU-Alignment/PKU-SafeRLHF-* style pairwise columns
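A minimal sketch of how the first pairwise variant maps onto the DPO schema, under the assumption that the adapter simply renames columns; `data_adapter.py` is the authoritative implementation and the hh-rlhf / PKU-SafeRLHF branches are omitted here.

```python
# Hypothetical sketch of preference-pair normalization; the shipped
# data_adapter.py handles more variants (hh-rlhf, PKU-SafeRLHF) than this.

def normalize_preference_record(record: dict) -> dict:
    """Map a raw pairwise record into the {system, prompt, chosen, rejected} schema."""
    if {"prompt", "chosen", "rejected"} <= record.keys():
        # Already in the target schema.
        return {"system": record.get("system", ""), "prompt": record["prompt"],
                "chosen": record["chosen"], "rejected": record["rejected"]}
    if {"question", "response_chosen", "response_rejected"} <= record.keys():
        # Rename the pairwise columns into the canonical schema.
        return {"system": record.get("system", ""), "prompt": record["question"],
                "chosen": record["response_chosen"],
                "rejected": record["response_rejected"]}
    raise ValueError(f"Unrecognized preference record keys: {sorted(record)}")
```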
Supported PPO prompt schema:

```json
{"system": "optional role prompt", "prompt": "generation prompt only"}
```
Suggested role split:

- `data_insight`: facts, supported insights, evidence refs, uncertainty only
- `writer`: briefs, chatbot answers, and section drafts that consume structured evidence
Files in this experiment:

- `utils.py`: local copy of shared training helpers and arguments
- `data_adapter.py`: local schema normalizer for SFT / DPO / reward / PPO
- `sft.py`, `dpo.py`, `reward_model.py`, `ppo_multi_adapter.py`: experiment training entrypoints
- `merge_adapter.py`, `ma_ppo_config.py`, `ma_ppo_trainer.py`: copied dependencies needed by this experiment
- `data/*.sample.jsonl`: schema examples and smoke-test inputs
- `scripts/run_ba_role_*.sh`: standalone run scripts
Quick start:
bash ./experiments/ba_agent_posttrain/scripts/run_ba_role_sft.sh \
ROLE_NAME=data_insight \
SFT_DATASET_NAME=./experiments/ba_agent_posttrain/data/data_insight_sft.sample.jsonl
bash ./experiments/ba_agent_posttrain/scripts/run_ba_role_sft.sh \
ROLE_NAME=writer \
SFT_DATASET_NAME=./experiments/ba_agent_posttrain/data/writer_sft.sample.jsonl
bash ./experiments/ba_agent_posttrain/scripts/run_ba_role_dpo.sh \
ROLE_NAME=writer \
PREFERENCE_DATASET_NAME=./experiments/ba_agent_posttrain/data/writer_preference.sample.jsonl
For real runs, replace the sample files with your own:

- `data_insight_sft.jsonl`
- `writer_sft.jsonl`
- `writer_preference.jsonl`
- `writer_prompts.jsonl`
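Before pointing the run scripts at your own files, it can help to check that each line is valid JSON with the keys the schemas above require (a single malformed JSONL line is enough to break downstream loaders). This is a minimal pre-run check, not part of the experiment code; the `REQUIRED_KEYS` table reflects only what this README states.

```python
# Minimal sketch of a pre-run JSONL check; key requirements are taken from
# the schemas in this README, not from the training code itself.
import json

REQUIRED_KEYS = {
    "sft": {"prompt", "response"},
    "preference": {"prompt", "chosen", "rejected"},
    "ppo": {"prompt"},
}

def validate_jsonl(path: str, kind: str) -> int:
    """Return the number of valid records; raise on the first bad line."""
    required = REQUIRED_KEYS[kind]
    count = 0
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            if not line.strip():
                continue  # skip blank lines rather than feed them to json.loads
            record = json.loads(line)  # raises ValueError on non-JSON lines
            missing = required - record.keys()
            if missing:
                raise ValueError(f"{path}:{lineno} missing keys {sorted(missing)}")
            count += 1
    return count
```

For example, `validate_jsonl("writer_preference.jsonl", "preference")` fails fast with the offending line number instead of a cryptic parse error mid-training.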