%0 Journal Article %T In-Document Table Data Inference Based on LLM-ICL Model %A Xinrui Dou %J Open Access Library Journal %V 12 %N 12 %P 1-13 %@ 2333-9721 %D 2025 %I Open Access Library %R 10.4236/oalib.1114603 %X This paper proposes a structured data prediction method based on Large Language Models with In-Context Learning (LLM-ICL). The method designs sample selection strategies to choose samples closely related to the prediction task and converts structured data into text sequences, which are then provided as input to large language models for prediction through in-context learning. To validate the effectiveness of the method, experiments were conducted on the IPUMS dataset. In the few-shot setting with only 10 demonstration samples, the best-performing model, Qwen-plus, achieves a prediction accuracy of 79.4%, significantly outperforming traditional supervised machine learning algorithms trained on the same sample size (XGBoost at 73.5% and KNN at 71.1%). Further analysis reveals that KNN and XGBoost require approximately 500 and 16,000 samples, respectively, to reach the accuracy that LLM-ICL attains with just 10 samples. Additionally, the sample selection strategy significantly impacts performance: nearest neighbor sampling further improves accuracy compared to random selection. This research demonstrates the substantial potential and application value of LLM-ICL in few-shot structured data prediction tasks. %K Large Language Models %K In-Context Learning %K Table Reasoning %K Few-Shot Learning %U http://www.oalib.com/paper/6880165