# ezpz.data.hf
See `ezpz/data/hf.py`.
HuggingFace Datasets loading and tokenization.
## ToyTextDataset

Bases: `Dataset`
Pads or truncates sentences to a fixed length.
Source code in src/ezpz/data/hf.py
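A minimal usage sketch with a `DataLoader`. The constructor arguments shown here (`texts`, `vocab`, `seq_len`) are assumptions for illustration, not the documented signature, so verify them against the source before copying.

```python
from torch.utils.data import DataLoader

from ezpz.data.hf import ToyTextDataset, build_vocab

texts = ["the cat sat on the mat", "a dog barked"]
vocab = build_vocab(texts)

# Hypothetical constructor arguments; verify against src/ezpz/data/hf.py.
dataset = ToyTextDataset(texts, vocab, seq_len=16)
loader = DataLoader(dataset, batch_size=2)

for batch in loader:
    print(batch)  # fixed-length samples, padded or truncated to seq_len
```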
## build_vocab(texts)
Create a tiny vocabulary from a list of strings.
Source code in src/ezpz/data/hf.py
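A quick sketch of a call. It assumes the return value is a token-to-id mapping, which is a guess from the name rather than documented behavior.

```python
from ezpz.data.hf import build_vocab

texts = ["the cat sat", "the dog ran"]
vocab = build_vocab(texts)

# Assuming a token -> id mapping; exact keys and ordering depend on the implementation.
print(len(vocab), sorted(vocab)[:5])
```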
## get_hf_text_dataset(*, dataset_name, split, text_column, tokenizer_name, seq_len, limit, seed)
Build a tokenized HF dataset with input_ids + attention_mask.
Returns:

| Type | Description |
|---|---|
| `tuple[Dataset, AutoTokenizer]` | Tokenized dataset (torch formatted) and tokenizer. |
Source code in src/ezpz/data/hf.py
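A sketch of a full call. All arguments are keyword-only, matching the signature above; the concrete values (`ag_news`, `gpt2`, the sizes, and the seed) are illustrative choices for a smoke test, not library defaults.

```python
from ezpz.data.hf import get_hf_text_dataset

# Illustrative values; any HF hub dataset with a text column should work.
dataset, tokenizer = get_hf_text_dataset(
    dataset_name="ag_news",
    split="train",
    text_column="text",
    tokenizer_name="gpt2",
    seq_len=128,
    limit=1_000,  # keep the slice small for quick runs
    seed=42,
)

# Torch-formatted rows expose tensors for input_ids and attention_mask.
sample = dataset[0]
print(sample["input_ids"].shape, sample["attention_mask"].shape)
print(tokenizer.decode(sample["input_ids"]))
```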
## load_hf_texts(dataset_name, split, text_column, limit)
Pull a small slice of text from a Hugging Face dataset for quick experiments.
This uses only a limited number of rows (limit) to keep the example light.
Source code in src/ezpz/data/hf.py
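A short sketch, assuming the function returns a list of strings (the docstring above suggests as much). The dataset name and row limit are illustrative.

```python
from ezpz.data.hf import load_hf_texts

# Illustrative arguments: 100 rows of the "text" column from ag_news.
texts = load_hf_texts("ag_news", "train", "text", 100)
print(len(texts))
print(texts[0][:80])
```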
## split_dataset(data_args, train_split_name='train', validation_split_name=None, cache_dir=None, token=None, trust_remote_code=False)
Splits the dataset into training and validation sets based on the provided split names.
Source code in src/ezpz/data/hf.py
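A hedged sketch of a call, under the assumption that `data_args` is a config object carrying dataset fields such as `dataset_name`; those attribute names are hypothetical stand-ins, so match them to the actual data-arguments class used by the library.

```python
from types import SimpleNamespace

from ezpz.data.hf import split_dataset

# Hypothetical stand-in for the real data-arguments object; the attribute
# names here are assumptions, not the documented interface.
data_args = SimpleNamespace(dataset_name="ag_news", dataset_config_name=None)

splits = split_dataset(
    data_args,
    train_split_name="train",
    validation_split_name="validation",
    trust_remote_code=False,
)
```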