Unanswered: VFP - need to import text file with more than 256 fields
I wrote a program that uses the low-level file functions to read the text file one line at a time. I then parse the line to get the fields and write them to 2 tables to get around the limitation of 256 fields.
The program is very slow and I assume it's because I'm only reading in one line at a time. Any suggestions on a more efficient way of handling this?
Yes, Excel has a limit of 65,536 rows (records) by 256 columns (fields), but if your files fit within those limits, importing through Excel might be much better (read: faster) than parsing the text yourself with the low-level file functions.
When you are finished with the import of the text file, you select the individual columns and rows for export to your individual DBFs.
I think you've gone about it in the most efficient way.
There are a couple of things I am going to suggest that may increase performance, but no guarantees.
If you are using FREAD(), consider FGETS() instead to read an entire line at a time. Note that FGETS() returns at most 254 characters by default, so pass a larger byte count (up to 8,192) if your records are long.
Do you have the tables opened exclusively? If not, try that.
Don't use any indexes when importing. Build them after the import has finished.
If you are using a network, move the files to your hard drive if you can.
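Put together, those suggestions might look something like the following sketch. The file name, table names, and tab delimiter are all assumptions for illustration, and the ALINES() separator argument assumes VFP 7 or later:

```foxpro
* Sketch: FGETS() per line, exclusive tables, no indexes, local file.
LOCAL lnHandle, lcLine, lnFields
LOCAL ARRAY laFields[1]
USE table1 IN 0 EXCLUSIVE                   && open both targets exclusively
USE table2 IN 0 EXCLUSIVE
lnHandle = FOPEN("C:\local\bigfile.txt")    && local copy, not the network share
IF lnHandle < 0
    ? "Could not open file"
    RETURN
ENDIF
DO WHILE NOT FEOF(lnHandle)
    lcLine = FGETS(lnHandle, 8192)          && default is only 254 bytes
    lnFields = ALINES(laFields, lcLine, CHR(9))   && split on tabs
    * INSERT the first 256 elements into table1, the rest into table2
ENDDO
= FCLOSE(lnHandle)
```

Build the indexes on table1 and table2 only after the loop finishes.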
I'm thinking of using FREAD() and FSEEK() to read the file in large chunks. I'm hoping to find a simpler solution though.
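A chunked read along those lines might look like this sketch (file and table names are placeholders). One thing worth noting: FSEEK() isn't strictly needed for a sequential pass, since FREAD() advances the file pointer itself; the real work is carrying the partial last line of each chunk over to the next one:

```foxpro
* Sketch: read ~64K blocks with FREAD(), split the buffer into lines,
* and carry the trailing partial line into the next block.
LOCAL lnHandle, lcBuffer, lcCarry, lnLines, i
LOCAL ARRAY laLines[1]
lnHandle = FOPEN("C:\local\bigfile.txt")
lcCarry = ""
DO WHILE NOT FEOF(lnHandle)
    lcBuffer = lcCarry + FREAD(lnHandle, 65000)
    lnLines = ALINES(laLines, lcBuffer)     && split on line breaks
    lcCarry = laLines[lnLines]              && possibly incomplete last line
    FOR i = 1 TO lnLines - 1
        * parse laLines[i] and write to the two tables
    ENDFOR
ENDDO
IF NOT EMPTY(lcCarry)
    * parse the final line here
ENDIF
= FCLOSE(lnHandle)
```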
The 'Hacker's Guide to VFP' suggests putting the entire file into a memo field and using the memo functions to split out the records. I have no idea how effectively VFP will function with a 1GB memo, but I figure that any solution that makes use of VFP's file buffering will probably be a lot more effective than what I'm currently doing.
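A sketch of that memo approach is below (table and field names are placeholders). Two caveats, hedged: VFP character strings top out around 16 MB, so FILETOSTR() won't swallow a 1GB file, but APPEND MEMO ... FROM should; and a plain MLINE() call rescans the memo from the top each time, which gets quadratic over a big file, so the documented _MLINE offset pattern is used to keep each call picking up where the last one stopped:

```foxpro
* Sketch: load the whole file into a memo field, then walk it line by
* line with MEMLINES()/MLINE(), using _MLINE for sequential access.
SET MEMOWIDTH TO 8192                       && so MLINE() doesn't wrap long records
CREATE CURSOR curImport (mRaw M)
APPEND BLANK
APPEND MEMO mRaw FROM "bigfile.txt"
LOCAL lnLines, i, lcLine
lnLines = MEMLINES(curImport.mRaw)
_MLINE = 0                                  && reset the memo offset
FOR i = 1 TO lnLines
    lcLine = MLINE(curImport.mRaw, 1, _MLINE)   && _MLINE advances after each call
    * parse lcLine into fields and insert into the two tables
ENDFOR
```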