Knit to PDF or Word

Hi, I'm new to RStudio. I'm trying to make an R Markdown file; when I run it interactively it works correctly, but when I want to knit it to Word or PDF I get this message:

Error in eval(expr, envir, enclos): object 'all_trips_v2' not found
Calls: <Anonymous> ... withVisible -> eval_with_user_handlers -> eval -> eval
Execution halted

---
title: "TripTraining"
author: "Hafed"
date: "2023-06-15"
output: word_document
---

## My first R Markdown report
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```

## importing library
```{r library}
library(tidyverse)
library(lubridate)
library(ggplot2)
```

### Data Exploration
#### Data Frame
```{r data-frame}
df <- all_trips_v2

```
#### Summary of the data frame
```{r summary}
summary(df)
```

Hi @Hafed.gh, you need to load the data in a chunk.

When you knit, the document runs in a fresh R session, so objects that only exist in your interactive workspace (like `all_trips_v2`) are not found. For example, if the data is an Excel file, you need to put something like this, adapted to the folders on your PC. Load it in one of the first chunks, because knitting executes the chunks in order.

```r
library(readxl)
datos <- read_excel("D:\\Documents\\Solicitud\\Luis\\Power BI\\BDISTRIBUCION_FULL.xlsx")
```
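Inside the Rmd itself, that load would go in an early chunk so every later chunk can see the object. A minimal sketch, reusing the example path above (adapt the path and object name to your file):

```r
# Hypothetical first data chunk in the Rmd: it runs before any chunk
# that references all_trips_v2, because knitr executes chunks in order.
library(readxl)
all_trips_v2 <- read_excel("D:\\Documents\\Solicitud\\Luis\\Power BI\\BDISTRIBUCION_FULL.xlsx")
```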

Another way is to set the working directory and then load the data.
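A minimal sketch of that approach, reusing the same example folder (the directory is a placeholder for wherever your file lives):

```r
# Point R at the data folder once; later reads can use bare file names.
library(readxl)
setwd("D:/Documents/Solicitud/Luis/Power BI")
datos <- read_excel("BDISTRIBUCION_FULL.xlsx")
```

Note that in an Rmd, knitr evaluates each chunk relative to the document's own directory, so relative paths from the Rmd's location are usually more reliable than `setwd()`.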


Hi M_AcostaCH, first of all thank you very much, yes, it works! If you don't mind, I have another question related to this topic: in my example I read 12 tables and then join them into one table. What should I do in that case?
```r
m1_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202201-divvy-tripdata.csv")
m2_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202202-divvy-tripdata.csv")
m3_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202203-divvy-tripdata.csv")
m4_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202204-divvy-tripdata.csv")
m5_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202205-divvy-tripdata.csv")
m6_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202206-divvy-tripdata.csv")
m7_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202207-divvy-tripdata.csv")
m8_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202208-divvy-tripdata.csv")
m9_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202209-divvy-tripdata.csv")
m10_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202210-divvy-tripdata.csv")
m11_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202211-divvy-tripdata.csv")
m12_2022 <- read.csv("C:/Users/Admin/Documents/tripdataCSV/202212-divvy-tripdata.csv")
all_trips <- bind_rows(m1_2022, m2_2022, m3_2022, m4_2022, m5_2022, m6_2022,
                       m7_2022, m8_2022, m9_2022, m10_2022, m11_2022, m12_2022)
```

`bind_rows()` is used to combine rows from multiple data sets into one. This function is useful when you have data sets with the same structure (same number and names of columns) and you want to stack them vertically.
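As a side note, the twelve `read.csv()` calls can be collapsed into one pass. A sketch, assuming all the monthly files sit in the same folder and follow the same naming pattern as above:

```r
library(dplyr)

# Find every 2022 monthly file, read each one, and stack them vertically.
files <- list.files("C:/Users/Admin/Documents/tripdataCSV",
                    pattern = "^2022[0-9]{2}-divvy-tripdata\\.csv$",
                    full.names = TRUE)
all_trips <- bind_rows(lapply(files, read.csv))
```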

You could merge all the data in a separate script, save a single file with all the data, and then load that file in the Rmd document. Although it should not cause any problem if you do the merge in a chunk.
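One way to do that merge-once-then-load pattern (a sketch; the `.rds` file name is an assumption):

```r
# In a separate preparation script: merge once, then save one file.
saveRDS(all_trips, "C:/Users/Admin/Documents/tripdataCSV/all_trips.rds")

# In the Rmd's first chunk: load the already-merged object.
all_trips <- readRDS("C:/Users/Admin/Documents/tripdataCSV/all_trips.rds")
```

`.rds` is R's compressed binary format, so loading it is much faster than re-reading twelve CSV files on every knit.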

Hi M_AcostaCH, thank you very much for helping me with this issue. I have a question: when I save the file on my PC, I use this method:

```r
library(readxl)
datos <- read_excel("D:\\Documents\\Solicitud\\Luis\\Power BI\\BDISTRIBUCION_FULL.xlsx")
```

But when I face a huge data file and I can't save it on my PC, what should I do in that case?

`read_excel()` is for loading the data, not saving it.

To save data in Excel format, I like to use `library(openxlsx)`:

```r
install.packages("openxlsx")
library(openxlsx)

# merge the data into all_trips first, then write it out
write.xlsx(all_trips, "C:/Users/Admin/Documents/tripdataCSV/all_trips-data.xlsx")
```

This saves the `.xlsx` file under Documents on your PC. If the data is huge, you could save it in parts and paste them together manually into one file. Another option is to close RStudio and load only the data needed for this piece of work.
Also, once you have the `all_trips` data, you can delete the monthly objects such as `m1_2022` with `rm(m1_2022)` to free up your PC's memory.
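For example, matching the `m1_2022` … `m12_2022` names above (a sketch):

```r
# Remove all twelve monthly data frames in one call, then let R
# release the freed memory back to the operating system.
rm(list = ls(pattern = "^m[0-9]+_2022$"))
gc()
```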

thank you very much for your help


@Hafed.gh please next time format your code correctly - thanks !

FAQ: How to Format R Markdown Source
