I have a dataframe with 2 million rows.
I was wondering if there is a better format than a CSV file.
If I use a CSV file, will there be any loss of data when reading the file back in?
Depends on your definition of "better". The advantage of CSV is that it's just plain text and can be reopened anywhere.
For more speed, you can look at the {fst} and {qs} packages. But these formats can only be read in R (with the same package).
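A minimal sketch of both, assuming your data frame is called `df`:

```r
library(fst)
library(qs)

# {fst}: fast, compressed, column-oriented, R-only
write_fst(df, "data.fst")
df_fst <- read_fst("data.fst")

# {qs}: serializes any R object, including attributes and classes
qsave(df, "data.qs")
df_qs <- qread("data.qs")
```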
For speed with more interoperability, the parquet and feather formats are also optimized for large datasets and have libraries available in many other programming languages.
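A quick sketch with the {arrow} package (again assuming a data frame called `df`):

```r
library(arrow)

# Parquet: compressed, columnar, readable from Python, Julia, Spark, etc.
write_parquet(df, "data.parquet")
df_parquet <- read_parquet("data.parquet")

# Feather: less compression, but very fast to read and write
write_feather(df, "data.feather")
df_feather <- read_feather("data.feather")
```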
If you're only going to use part of this dataset at once, a database like duckdb will allow you to keep the data on disk and only load what you need when you need it.
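A rough sketch of the DBI/duckdb workflow (the file name, table name, and column in the query are just placeholders):

```r
library(DBI)
library(duckdb)

# Persistent database file on disk
con <- dbConnect(duckdb(), dbdir = "mydata.duckdb")
dbWriteTable(con, "mytable", df)

# Only the rows matching the query are loaded into memory
subset <- dbGetQuery(con, "SELECT * FROM mytable WHERE some_column > 100")

dbDisconnect(con, shutdown = TRUE)
```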
There are other possibilities that could be "better" in some context, e.g. disk.frame or SQLite.
If it's only numbers (as in your example), there shouldn't be any data loss (unless there is some kind of mix-up between dot and comma as the decimal separator, or stray row names hanging around). If you have R objects with attributes (i.e. not a simple data.frame), or if some columns should have a particular class (for example integer vs double), that information will be lost with CSV. You can usually keep it by using RDS or {qs}.
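For example (the columns here are made up just to show the round-trip):

```r
df <- data.frame(
  id    = 1:3,
  group = factor(c("a", "b", "a")),
  day   = as.Date("2024-01-01") + 0:2
)

write.csv(df, "data.csv", row.names = FALSE)
str(read.csv("data.csv"))   # group and day come back as plain character

saveRDS(df, "data.rds")
str(readRDS("data.rds"))    # factor and Date classes are preserved
```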