all messages for Emacs-related lists mirrored at yhetil.org
* opening large files (few hundred meg)
@ 2008-01-28 17:35 Xah Lee
  2008-01-28 18:05 ` Sven Joachim
  0 siblings, 1 reply; 34+ messages in thread
From: Xah Lee @ 2008-01-28 17:35 UTC (permalink / raw)
  To: help-gnu-emacs

If I want to process a huge file (my weekly log file is about 0.5GB),
what can I do?

I tried to open it, and Emacs says "maximum buffer size exceeded".

• How can I increase the limit?

• Is there a general solution for working with files in elisp without
actually loading the whole file?

Thanks in advance.

  Xah
  xah@xahlee.org
∑ http://xahlee.org/

* Re: opening large files (few hundred meg)
  2008-01-28 17:35 opening large files (few hundred meg) Xah Lee
@ 2008-01-28 18:05 ` Sven Joachim
  2008-01-28 19:31   ` Eli Zaretskii
       [not found]   ` <mailman.6646.1201548710.18990.help-gnu-emacs@gnu.org>
  0 siblings, 2 replies; 34+ messages in thread
From: Sven Joachim @ 2008-01-28 18:05 UTC (permalink / raw)
  To: help-gnu-emacs

On 2008-01-28 18:35 +0100, Xah Lee wrote:

> If I want to process a huge file (my weekly log file is about 0.5GB),
> what can I do?

If it is so huge, you may want to rotate it daily rather than weekly.
See logrotate(8), for instance.

> I tried to open it, and Emacs says "maximum buffer size exceeded".
>
> • How can I increase the limit?

Use a 64-bit system.

> • Is there a general solution for working with files in elisp without
> actually loading the whole file?

Not really, since visiting a file reads all of it into an Emacs buffer.
If the file is too large, you can split(1) it into smaller chunks that
Emacs can process.
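
For example, a minimal sketch run from inside Emacs (assuming GNU
coreutils' split is on your PATH; the file names here are made up):

;; cut the log into 100 MB pieces named chunk-aa, chunk-ab, ...
(call-process "split" nil nil nil "-b" "100M"
              (expand-file-name "~/logs/weekly.log")
              (expand-file-name "~/logs/chunk-"))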

Sven


* Re: opening large files (few hundred meg)
  2008-01-28 18:05 ` Sven Joachim
@ 2008-01-28 19:31   ` Eli Zaretskii
  2008-01-28 20:36     ` Andreas Röhler
       [not found]     ` <mailman.6652.1201552566.18990.help-gnu-emacs@gnu.org>
       [not found]   ` <mailman.6646.1201548710.18990.help-gnu-emacs@gnu.org>
  1 sibling, 2 replies; 34+ messages in thread
From: Eli Zaretskii @ 2008-01-28 19:31 UTC (permalink / raw)
  To: help-gnu-emacs

> From: Sven Joachim <svenjoac@gmx.de>
> Date: Mon, 28 Jan 2008 19:05:41 +0100
> 
> > • How can I increase the limit?
> 
> Use a 64-bit system.

Yes, that's the only practical way, sans splitting the file outside of
Emacs.

> > • Is there a general solution for working with files in elisp without
> > actually loading the whole file?
> 
> Not really, since visiting a file reads all of it into an Emacs buffer.

The problem is not with the buffer size per se, it's with the fact
that Emacs needs to be able to address each byte of the file's text
with an Emacs integer data type, which is 29 bits wide on 32-bit
machines.


* Re: opening large files (few hundred meg)
  2008-01-28 19:31   ` Eli Zaretskii
@ 2008-01-28 20:36     ` Andreas Röhler
       [not found]     ` <mailman.6652.1201552566.18990.help-gnu-emacs@gnu.org>
  1 sibling, 0 replies; 34+ messages in thread
From: Andreas Röhler @ 2008-01-28 20:36 UTC (permalink / raw)
  To: help-gnu-emacs; +Cc: Xah Lee, Sven Joachim

Am Montag, 28. Januar 2008 20:31 schrieb Eli Zaretskii:
> > From: Sven Joachim <svenjoac@gmx.de>
> > Date: Mon, 28 Jan 2008 19:05:41 +0100
> >
> > > • How can I increase the limit?
> >
> > Use a 64-bit system.
>
> Yes, that's the only practical way, sans splitting the file outside of
> Emacs.
>
> > > • Is there a general solution for working with files in elisp without
> > > actually loading the whole file?
> >
> > Not really, since visiting a file reads all of it into an Emacs buffer.
>
> The problem is not with the buffer size per se, it's with the fact
> that Emacs needs to be able to address each byte of the file's text
> with an Emacs integer data type, which is 29 bits wide on 32-bit
> machines.
>

What about a large-text-mode: sed reads chunks of text
in, one by one, letting Emacs forget the rest. Then
only the line number matters, which is sent to sed.
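
A rough sketch of the fetching half (the function name is made up,
and this is untested):

(defun large-text-load-chunk (file first last)
  "Show lines FIRST through LAST of FILE in the current buffer."
  (erase-buffer)
  ;; sed -n 'FIRST,LASTp' prints only the requested lines
  (call-process "sed" nil t nil "-n"
                (format "%d,%dp" first last)
                (expand-file-name file)))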

Andreas Röhler


* Re: opening large files (few hundred meg)
       [not found]     ` <mailman.6652.1201552566.18990.help-gnu-emacs@gnu.org>
@ 2008-01-28 21:50       ` Jason Rumney
  2008-01-29  7:07         ` Andreas Röhler
                           ` (4 more replies)
  2008-01-29 10:43       ` Johan Bockgård
  2008-01-29 16:33       ` Ted Zlatanov
  2 siblings, 5 replies; 34+ messages in thread
From: Jason Rumney @ 2008-01-28 21:50 UTC (permalink / raw)
  To: help-gnu-emacs

On 28 Jan, 20:36, Andreas Röhler <andreas.roeh...@online.de> wrote:

> What about a large-text-mode: sed reads chunks of text
> in, one by one, letting Emacs forget the rest. Then
> only the line number matters, which is sent to sed.

Such solutions have been proposed before, but the likely way that a
user will navigate through such a huge file is by searching, so just
paging parts of the file in and out is only part of the solution; it
must also offer searching to be useful.


* Re: opening large files (few hundred meg)
  2008-01-28 21:50       ` Jason Rumney
@ 2008-01-29  7:07         ` Andreas Röhler
  2008-01-29  7:20         ` Thierry Volpiatto
                           ` (3 subsequent siblings)
  4 siblings, 0 replies; 34+ messages in thread
From: Andreas Röhler @ 2008-01-29  7:07 UTC (permalink / raw)
  To: help-gnu-emacs

Am Montag, 28. Januar 2008 22:50 schrieb Jason Rumney:
> On 28 Jan, 20:36, Andreas Röhler <andreas.roeh...@online.de> wrote:
> > What about a large-text-mode: sed reads chunks of text
> > in, one by one, letting Emacs forget the rest. Then
> > only the line number matters, which is sent to sed.
>
> Such solutions have been proposed before, but the likely way that a
> user will navigate through such a huge file is by searching, so just
> paging parts of the file in and out is only part of the solution; it
> must also offer searching to be useful.


Did someone start to write that?

As sed is fast at providing chunks, searching and
replacing should work with reasonable speed.

Andreas Röhler


* Re: opening large files (few hundred meg)
  2008-01-28 21:50       ` Jason Rumney
  2008-01-29  7:07         ` Andreas Röhler
@ 2008-01-29  7:20         ` Thierry Volpiatto
       [not found]         ` <mailman.6666.1201591238.18990.help-gnu-emacs@gnu.org>
                           ` (2 subsequent siblings)
  4 siblings, 0 replies; 34+ messages in thread
From: Thierry Volpiatto @ 2008-01-29  7:20 UTC (permalink / raw)
  To: Jason Rumney; +Cc: help-gnu-emacs

Jason Rumney <jasonrumney@gmail.com> writes:

> On 28 Jan, 20:36, Andreas Röhler <andreas.roeh...@online.de> wrote:
>
>> What about a large-text-mode: sed reads chunks of text
>> in, one by one, letting Emacs forget the rest. Then
>> only the line number matters, which is sent to sed.
>
> Such solutions have been proposed before, but the likely way that a
> user will navigate through such a huge file is by searching, so just
> paging parts of the file in and out is only part of the solution; it
> must also offer searching to be useful.

It's possible to do paging and searching with screen (copy-mode ==> C-a [).
In this mode you can mark, copy, paste, and search (C-r, C-s).
It's possible to make the commands Emacs-like in your screenrc:

markkeys "h=^B:l=^F:$=^E:0=^A"

screen works in ansi-term/term.
But how can a text file become so big?

-- 
A + Thierry
Pub key: http://pgp.mit.edu


* Re: opening large files (few hundred meg)
       [not found]         ` <mailman.6666.1201591238.18990.help-gnu-emacs@gnu.org>
@ 2008-01-29  9:08           ` Tim X
  2008-01-29 16:34             ` Xah Lee
  2008-02-06  1:47             ` Samuel Karl Peterson
  2008-01-29 14:52           ` Joel J. Adamson
  1 sibling, 2 replies; 34+ messages in thread
From: Tim X @ 2008-01-29  9:08 UTC (permalink / raw)
  To: help-gnu-emacs

Thierry Volpiatto <thierry.volpiatto@gmail.com> writes:

> Jason Rumney <jasonrumney@gmail.com> writes:
>
>> On 28 Jan, 20:36, Andreas Röhler <andreas.roeh...@online.de> wrote:
>>
>>> What about a large-text-mode: sed reads chunks of text
>>> in, one by one, letting Emacs forget the rest. Then
>>> only the line number matters, which is sent to sed.
>>
>> Such solutions have been proposed before, but the likely way that a
>> user will navigate through such a huge file is by searching, so just
>> paging parts of the file in and out is only part of the solution; it
>> must also offer searching to be useful.
>
> It's possible to do paging and searching with screen (copy-mode ==> C-a [).
> In this mode you can mark, copy, paste, and search (C-r, C-s).
> It's possible to make the commands Emacs-like in your screenrc:
>
> markkeys "h=^B:l=^F:$=^E:0=^A"
>
> screen works in ansi-term/term.
> But how can a text file become so big?
>
> -- 
> A + Thierry
> Pub key: http://pgp.mit.edu
>
>

It's not that uncommon to encounter text files over half a gig in size. A
place I worked had systems that would generate logs in excess of 1 GB per
day (and that was with minimal logging). When I worked with Oracle,
there were some operations that involved multi-GB files that you needed
to edit (which I did using sed rather than a text editor).

However, it seems ridiculous to attempt to open a text file of the size
Xah is talking about inside an editor. Like others, I have to wonder
why his log file isn't rotated more often so that it is in manageable
chunks. It's obvious that nobody would read all of a text file that
large (especially not every week). More than likely, you would use
existing tools to select 'interesting' parts of the log and then deal
with them. Personally, I'd use something like Perl or one of the many
other scripting languages that are ideal for (and largely designed for)
this sort of problem.

Tim



-- 
tcross (at) rapttech dot com dot au


* Re: opening large files (few hundred meg)
       [not found]     ` <mailman.6652.1201552566.18990.help-gnu-emacs@gnu.org>
  2008-01-28 21:50       ` Jason Rumney
@ 2008-01-29 10:43       ` Johan Bockgård
  2008-01-29 15:35         ` Andreas Röhler
  2008-02-06  1:25         ` Samuel Karl Peterson
  2008-01-29 16:33       ` Ted Zlatanov
  2 siblings, 2 replies; 34+ messages in thread
From: Johan Bockgård @ 2008-01-29 10:43 UTC (permalink / raw)
  To: help-gnu-emacs

Andreas Röhler <andreas.roehler@online.de> writes:

> What about a large-text-mode: sed reads chunks of text in, one by one,
> letting Emacs forget the rest. Then only the line number matters, which
> is sent to sed.

How about using FUSE?

http://chunkfs.florz.de/
http://sourceforge.net/projects/joinsplitfs/


-- 
Johan Bockgård


* Re: opening large files (few hundred meg)
       [not found]         ` <mailman.6666.1201591238.18990.help-gnu-emacs@gnu.org>
  2008-01-29  9:08           ` Tim X
@ 2008-01-29 14:52           ` Joel J. Adamson
  1 sibling, 0 replies; 34+ messages in thread
From: Joel J. Adamson @ 2008-01-29 14:52 UTC (permalink / raw)
  To: help-gnu-emacs

Thierry Volpiatto <thierry.volpiatto@gmail.com> writes:

[...]


> screen works in ansi-term/term.
> But how can a text file become so big?


Good question: perhaps it's a log file and its size is trying to tell
you something.

Do you really want to edit it or just view it?

Joel

-- 
Joel J. Adamson
Biostatistician
Pediatric Psychopharmacology Research Unit
Massachusetts General Hospital
Boston, MA  02114
(617) 643-1432
(303) 880-3109


* Re: opening large files (few hundred meg)
  2008-01-29 10:43       ` Johan Bockgård
@ 2008-01-29 15:35         ` Andreas Röhler
  2008-02-06  1:25         ` Samuel Karl Peterson
  1 sibling, 0 replies; 34+ messages in thread
From: Andreas Röhler @ 2008-01-29 15:35 UTC (permalink / raw)
  To: help-gnu-emacs; +Cc: Johan Bockgård

Am Dienstag, 29. Januar 2008 11:43 schrieb Johan Bockgård:
> Andreas Röhler <andreas.roehler@online.de> writes:
> > What about a large-text-mode: sed reads chunks of text in, one by one,
> > letting Emacs forget the rest. Then only the line number matters, which
> > is sent to sed.
>
> How about using FUSE?
>
> http://chunkfs.florz.de/
> http://sourceforge.net/projects/joinsplitfs/


Looks interesting, thanks for the hint. 

Andreas Röhler


* Re: opening large files (few hundred meg)
       [not found]     ` <mailman.6652.1201552566.18990.help-gnu-emacs@gnu.org>
  2008-01-28 21:50       ` Jason Rumney
  2008-01-29 10:43       ` Johan Bockgård
@ 2008-01-29 16:33       ` Ted Zlatanov
  2 siblings, 0 replies; 34+ messages in thread
From: Ted Zlatanov @ 2008-01-29 16:33 UTC (permalink / raw)
  To: help-gnu-emacs

On Mon, 28 Jan 2008 21:36:50 +0100 Andreas Röhler <andreas.roehler@online.de> wrote: 

AR> What about a large-text-mode: sed reads chunks of text
AR> in, one by one, letting Emacs forget the rest. Then
AR> only the line number matters, which is sent to sed.

That's a sensible idea.  I think it should act like narrowing, except
that you are not allowed to widen: you can only redefine the narrowing
criteria (line region, byte region, regular expression, etc.), and you
can also grow or shrink the window up to a practical limit.  Most of
such a mode's filtering functions should be implemented in C, so it's
not an easy task.

Ted


* Re: opening large files (few hundred meg)
  2008-01-29  9:08           ` Tim X
@ 2008-01-29 16:34             ` Xah Lee
  2008-01-29 19:06               ` Tom Tromey
                                 ` (3 more replies)
  2008-02-06  1:47             ` Samuel Karl Peterson
  1 sibling, 4 replies; 34+ messages in thread
From: Xah Lee @ 2008-01-29 16:34 UTC (permalink / raw)
  To: help-gnu-emacs

Tim X wrote:

> Personally, I'd use something like Perl or one of the many
> other scripting languages that are ideal for (and largely designed for)
> this sort of problem.

An interesting thing, for me, about wanting to use elisp to open a
large file is this:

Recently I discovered that Emacs Lisp is probably the most powerful
language for processing text, far more so than Perl. In Emacs there is
the “buffer” infrastructure, which lets one move a point back and
forth, delete, insert, search by regex, etc., with literally a few
thousand built-in text-processing functions to help with the task.
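
For example, the basic idiom looks like this (just a sketch; the file
names here are made up):

(with-temp-buffer
  (insert-file-contents "access.log")         ; read the file into a buffer
  (goto-char (point-min))                     ; put point at the beginning
  (while (re-search-forward "^ *#.*\n" nil t) ; regex search, free movement
    (replace-match ""))                       ; e.g. delete comment lines
  (write-region (point-min) (point-max) "access-clean.log"))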

In Perl or Python, by contrast, one typically either reads the file one
line at a time and processes it one line at a time, or reads the whole
file in one shot but still basically processes it one line at a time.
The gist is that any function you might want to apply to the text is
applied to one line at a time, and it can't see what's before or after
that line. (One could write it so that it “buffers” the neighboring
lines, but that's rather unusual and involves more code. Alternatively,
one could read in one char at a time and move an index back and forth,
but that loses all the regex power, and dealing with files as raw bytes
and file pointers is extremely painful.)

The problem with processing one line at a time is that for much data
the file is a tree structure (such as HTML/XML or Mathematica source
code): there is a root tag that opens at the beginning of the file and
closes at the end, and most tree branches span multiple lines.
Processing such a file line by line is almost useless. So in Perl the
typical solution is to read in the whole file and apply regexes to the
whole content. This really puts stress on the regexes, and basically
they won't work unless the processing needed is really simple.

An alternative for processing a tree-structured file such as XML is to
use a proper parser (e.g. JavaScript/DOM, or a library/module).
However, with a parser the nature of the programming ceases to be text
processing and becomes structural manipulation. In general, the program
becomes more complex and difficult. Also, if one uses an XML parser and
the DOM, the formatting of the file is lost (i.e. all your original
line endings and indentation will be gone).

This is a major reason why I think Emacs Lisp is far more versatile: it
can read the XML into Emacs's buffer infrastructure, and then the
programmer can move a point back and forth, freely using regexes to
search or replace text. For complex XML processing such as tree
transformation (e.g. XSLT), an XML/DOM parser/model is still more
suitable, but for most simple manipulation (such as processing HTML
files), using an elisp buffer and treating the file as text is far
easier and more flexible. And if one so wishes, she can use an XML/DOM
parser/model written in elisp, just as in other languages.

So, last year I switched all new text-processing tasks from Perl to
elisp.

But now I have a problem, which I “discovered” this week. What to do
when the file is huge? Normally one can still just handle huge files,
since these days memory comes in gigabytes. But in my particular case
my file happens to be 0.5 gig, so I couldn't even open it in Emacs
(presumably because I need a 64-bit OS and hardware; thanks). So,
given the situation, I'm thinking that perhaps there is a way to use
Emacs Lisp to read the file line by line, just as in Perl or Python.
(The file is just an Apache log file: it can be processed line by line,
split, or fed to sed/awk/grep with pipes. The reason I want to open it
in Emacs and process it with elisp is more exploration than practical
need.)

  Xah
  xah@xahlee.org
∑ http://xahlee.org/

☄

On Jan 29, 1:08 am, Tim X <t...@nospam.dev.null> wrote:
> It's not that uncommon to encounter text files over half a gig in size. A
> place I worked had systems that would generate logs in excess of 1 GB per
> day (and that was with minimal logging). When I worked with Oracle,
> there were some operations that involved multi-GB files that you needed
> to edit (which I did using sed rather than a text editor).
>
> However, it seems ridiculous to attempt to open a text file of the size
> Xah is talking about inside an editor. Like others, I have to wonder
> why his log file isn't rotated more often so that it is in manageable
> chunks. It's obvious that nobody would read all of a text file that
> large (especially not every week). More than likely, you would use
> existing tools to select 'interesting' parts of the log and then deal
> with them. Personally, I'd use something like Perl or one of the many
> other scripting languages that are ideal for (and largely designed for)
> this sort of problem.
>
> Tim
>
> --
> tcross (at) rapttech dot com dot au


* Re: opening large files (few hundred meg)
  2008-01-29 16:34             ` Xah Lee
@ 2008-01-29 19:06               ` Tom Tromey
  2008-01-29 20:44                 ` Eli Zaretskii
       [not found]                 ` <mailman.6705.1201639469.18990.help-gnu-emacs@gnu.org>
  2008-01-29 22:10               ` Jason Rumney
                                 ` (2 subsequent siblings)
  3 siblings, 2 replies; 34+ messages in thread
From: Tom Tromey @ 2008-01-29 19:06 UTC (permalink / raw)
  To: help-gnu-emacs

>>>>> "Xah" == Xah Lee <xah@xahlee.org> writes:

Xah> But now I have a problem, which I “discovered” this week. What to do
Xah> when the file is huge? Normally one can still just handle huge files,
Xah> since these days memory comes in gigabytes. But in my particular case
Xah> my file happens to be 0.5 gig, so I couldn't even open it in Emacs
Xah> (presumably because I need a 64-bit OS and hardware; thanks).

Perhaps you could process the file in chunks, using the optional args
to insert-file-contents to put subsets of the file into a buffer.

I haven't tried this myself, so I am not even sure it would work.
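
Something like this is what I have in mind (untested, and the offsets
are arbitrary):

;; insert only the first 10 MB of the file into the current buffer
(insert-file-contents "huge.log" nil 0 (* 10 1024 1024))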

Tom


* Re: opening large files (few hundred meg)
  2008-01-29 19:06               ` Tom Tromey
@ 2008-01-29 20:44                 ` Eli Zaretskii
       [not found]                 ` <mailman.6705.1201639469.18990.help-gnu-emacs@gnu.org>
  1 sibling, 0 replies; 34+ messages in thread
From: Eli Zaretskii @ 2008-01-29 20:44 UTC (permalink / raw)
  To: help-gnu-emacs

> From: Tom Tromey <tromey@redhat.com>
> Date: Tue, 29 Jan 2008 12:06:50 -0700
> 
> Perhaps you could process the file in chunks, using the optional args
> to insert-file-contents to put subsets of the file into a buffer.
> 
> I haven't tried this myself, so I am not even sure it would work.

No need to try: it won't work.  As I wrote earlier in this thread, the
problem is that Emacs cannot address offsets into the buffer larger
than 0.5 gig, and this problem will cause the arguments to
insert-file-contents to overflow exactly like when you read the entire
file.


* Re: opening large files (few hundred meg)
  2008-01-29 16:34             ` Xah Lee
  2008-01-29 19:06               ` Tom Tromey
@ 2008-01-29 22:10               ` Jason Rumney
  2008-01-30 17:08                 ` Joel J. Adamson
  2008-01-31  5:57               ` Tim X
  2008-02-08 11:25               ` Giacomo Boffi
  3 siblings, 1 reply; 34+ messages in thread
From: Jason Rumney @ 2008-01-29 22:10 UTC (permalink / raw)
  To: help-gnu-emacs

On 29 Jan, 16:34, Xah Lee <x...@xahlee.org> wrote:

>   The reason I want to
> open it in Emacs and process it with elisp is more exploration than
> practical need.)

And this nicely summarizes why such a feature does not yet exist in
Emacs. Many have hit the limit and been shocked at finding an
imperfection in Emacs, but they are then directed to tools like grep,
sed, perl, head and tail, and realise that for huge files a full-screen
editor that works on buffers is an inferior solution to command-line
tools that work on streams; their motivation for somehow shoehorning
this different way of working into Emacs disappears, as there is no
practical need.


* Re: opening large files (few hundred meg)
  2008-01-28 21:50       ` Jason Rumney
                           ` (2 preceding siblings ...)
       [not found]         ` <mailman.6666.1201591238.18990.help-gnu-emacs@gnu.org>
@ 2008-01-30 14:55         ` Stefan Monnier
  2008-02-06 16:42         ` Mathias Dahl
  4 siblings, 0 replies; 34+ messages in thread
From: Stefan Monnier @ 2008-01-30 14:55 UTC (permalink / raw)
  To: help-gnu-emacs

>> What about a large-text-mode: sed reads chunks of text
>> in, one by one, letting Emacs forget the rest. Then
>> only the line number matters, which is sent to sed.

> Such solutions have been proposed before, but the likely way that a
> user will navigate through such a huge file is by searching, so just
> paging parts of the file in and out is only part of the solution; it
> must also offer searching to be useful.

But now that isearch is able to do things like switch buffer when it
reaches the end of a buffer, we could write a large-file-mode which
"transparently" loads the various chunks while isearching.


        Stefan


* Re: opening large files (few hundred meg)
       [not found]   ` <mailman.6646.1201548710.18990.help-gnu-emacs@gnu.org>
@ 2008-01-30 15:12     ` Stefan Monnier
  2008-01-30 16:55       ` Sven Joachim
  2008-01-31 22:55     ` Ilya Zakharevich
       [not found]     ` <200801312255.m0VMt701019096@powdermilk.math.berkeley.edu>
  2 siblings, 1 reply; 34+ messages in thread
From: Stefan Monnier @ 2008-01-30 15:12 UTC (permalink / raw)
  To: help-gnu-emacs

>> Not really, since visiting a file reads all of it into an Emacs buffer.

> The problem is not with the buffer size per se, it's with the fact
> that Emacs needs to be able to address each byte of the file's text
> with an Emacs integer data type, which is 29 bits wide on 32-bit
> machines.

Well, that's true, but even if we lift this restriction (it might take
some tedious work and have other downsides, though nothing really
difficult), it won't help that much:

On a 32-bit system, the maximum memory available to a single process is
limited to anywhere between 1GB and 4GB, depending on the OS.  If you
consider that the large file will not be the only thing in Emacs's
memory, the best we can hope for is to handle a single 2GB file (and
there are all kinds of reasons why even this much can fail, e.g. if the
file's content is put into a multibyte rather than a unibyte buffer;
and of course, if we ever need to edit the buffer, we may need to grow
the gap, which implies reallocating extra memory, which may fail in
such limit cases).  A more realistic upper bound is 1GB.

Currently, the largest integer is 128MB.  This can be easily bumped up
to 256MB (I'm using such a hack on my local Emacs).  If we're willing to
work a bit more at it (at some small cost in other areas) we can push
this to 512MB.  XEmacs has pushed this even further and has a 1GB limit
(IIUC).  So the integer-size limit and the absolute theoretical maximum
imposed by the OS are about the same.


        Stefan


* Re: opening large files (few hundred meg)
  2008-01-30 15:12     ` Stefan Monnier
@ 2008-01-30 16:55       ` Sven Joachim
  2008-01-30 21:53         ` Stefan Monnier
  0 siblings, 1 reply; 34+ messages in thread
From: Sven Joachim @ 2008-01-30 16:55 UTC (permalink / raw)
  To: help-gnu-emacs

On 2008-01-30 16:12 +0100, Stefan Monnier wrote:

> Currently, the largest integer is 128MB.  This can be easily bumped up
> to 256MB (I'm using such a hack on my local Emacs).

Eh?  The largest integer _is_ already (256 MB - 1) in Emacs 22:

(* 256 1024 1023)
=> 268173312

(* 256 1024 1024)
=> -268435456

(- (* 256 1024 1024) 1)
=> 268435455

Slightly confused,
                  Sven


* Re: opening large files (few hundred meg)
  2008-01-29 22:10               ` Jason Rumney
@ 2008-01-30 17:08                 ` Joel J. Adamson
  0 siblings, 0 replies; 34+ messages in thread
From: Joel J. Adamson @ 2008-01-30 17:08 UTC (permalink / raw)
  To: help-gnu-emacs

Jason Rumney <jasonrumney@gmail.com> writes:

> On 29 Jan, 16:34, Xah Lee <x...@xahlee.org> wrote:
>
> imperfection in Emacs

How dare you use those words in the same sentence?

jfk (just kidding); if it were perfect I think it would be less fun to
use.  I strongly suggest using grep, sed and awk just for getting your
feet wet, though you may wish to swim once you're in the pool.

Joel
-- 
Joel J. Adamson
Biostatistician
Pediatric Psychopharmacology Research Unit
Massachusetts General Hospital
Boston, MA  02114
(617) 643-1432
(303) 880-3109


* Re: opening large files (few hundred meg)
       [not found]                 ` <mailman.6705.1201639469.18990.help-gnu-emacs@gnu.org>
@ 2008-01-30 20:01                   ` Stefan Monnier
  2008-01-30 22:04                     ` Eli Zaretskii
  0 siblings, 1 reply; 34+ messages in thread
From: Stefan Monnier @ 2008-01-30 20:01 UTC (permalink / raw)
  To: help-gnu-emacs

>> Perhaps you could process the file in chunks, using the optional args
>> to insert-file-contents to put subsets of the file into a buffer.
>> I haven't tried this myself, so I am not even sure it would work.

> No need to try: it won't work.  As I wrote earlier in this thread, the
> problem is that Emacs cannot address offsets into the buffer larger
> than 0.5 gig, and this problem will cause the arguments to
> insert-file-contents to overflow exactly like when you read the entire
> file.

You don't have to stick to the built-in limits of insert-file-contents:
you can extract parts of the file using `dd' first (using Elisp floats
to represent the larger integers).
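
A sketch of what I mean (untested; the block size and file name are
made up):

;; insert a 64MB window of the file starting near OFFSET (a float),
;; letting dd do the seeking so Emacs never sees a huge integer
(let* ((offset 3.0e9)                 ; byte offset, beyond fixnum range
       (bs 65536)
       (skip (floor (/ offset bs))))  ; whole blocks to skip
  (call-process "dd" nil t nil
                "if=/var/log/huge.log"
                (format "bs=%d" bs)
                (format "skip=%d" skip)
                "count=1024"))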

Also it'd be easy enough to extend insert-file-contents (at the C level)
to accept float values for BEG and END (or pairs of integers) so as to
be able to represent larger values.

It's quite doable.  The way I see it, a large-text-buffer would
generally have 3 chunks of N megabytes each, point being in the
middle one.  The 1st and 3rd chunks would be covered with
a `point-entered' property that would automatically slide the window
forward or backward to bring point back into the middle chunk.
That wouldn't be sufficient to make it all work, but it's probably
a good starting point.
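
In sketch form, the guard chunks would carry something like this (the
slide function and the chunk boundaries are hypothetical names):

;; any motion of point into a guard chunk slides the file window
(put-text-property (point-min) chunk1-end 'point-entered
                   (lambda (_old _new) (large-file-slide -1)))
(put-text-property chunk3-beg (point-max) 'point-entered
                   (lambda (_old _new) (large-file-slide +1)))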


        Stefan


* Re: opening large files (few hundred meg)
  2008-01-30 16:55       ` Sven Joachim
@ 2008-01-30 21:53         ` Stefan Monnier
  0 siblings, 0 replies; 34+ messages in thread
From: Stefan Monnier @ 2008-01-30 21:53 UTC (permalink / raw)
  To: help-gnu-emacs

>> Currently, the largest integer is 128MB.  This can be easily bumped up
>> to 256MB (I'm using such a hack on my local Emacs).

> Eh?  The largest integer _is_ already (256 MB -1) in Emacs 22:

Yes, sorry.  I mixed it up: the current limit is indeed 256MB, while my
local hack bumps it up to 512MB.  And XEmacs can go up to 1GB.


        Stefan


* Re: opening large files (few hundred meg)
  2008-01-30 20:01                   ` Stefan Monnier
@ 2008-01-30 22:04                     ` Eli Zaretskii
  0 siblings, 0 replies; 34+ messages in thread
From: Eli Zaretskii @ 2008-01-30 22:04 UTC (permalink / raw)
  To: help-gnu-emacs

> From: Stefan Monnier <monnier@iro.umontreal.ca>
> Date: Wed, 30 Jan 2008 15:01:44 -0500
> 
> >> Perhaps you could process the file in chunks, using the optional args
> >> to insert-file-contents to put subsets of the file into a buffer.
> >> I haven't tried this myself, so I am not even sure it would work.
> 
> > No need to try: it won't work.  As I wrote earlier in this thread, the
> > problem is that Emacs cannot address offsets into the buffer larger
> > than 0.5 gig, and this problem will cause the arguments to
> > insert-file-contents to overflow exactly like when you read the entire
> > file.
> 
> You don't have to stick to the built-in limits of insert-file-contents:
> you can extract parts of the file using `dd' first (using Elisp floats
> to represent the larger integers).

I was responding to a suggestion to use the optional args of
insert-file-contents to slice the file.  There are lots of other ways
of doing that, but they are unrelated to insert-file-contents being
able to read just a portion of a file, and to my response which you
quote.

> Also it'd be easy enough to extend insert-file-contents (at the C level)
> to accept float values for BEG and END (or pairs of integers) so as to
> be able to represent larger values.

One can hack Emacs to do anything -- this is Free Software, after
all.  But the OP wanted a way to visit large files without any
hacking, just by using existing facilities.


* Re: opening large files (few hundred meg)
  2008-01-29 16:34             ` Xah Lee
  2008-01-29 19:06               ` Tom Tromey
  2008-01-29 22:10               ` Jason Rumney
@ 2008-01-31  5:57               ` Tim X
  2008-01-31 15:35                 ` Stefan Monnier
  2008-02-08 11:25               ` Giacomo Boffi
  3 siblings, 1 reply; 34+ messages in thread
From: Tim X @ 2008-01-31  5:57 UTC (permalink / raw)
  To: help-gnu-emacs

Xah Lee <xah@xahlee.org> writes:
>
> But now I have a problem, which I “discovered” this week. What to do
> when the file is huge? Normally one can still just handle huge files,
> since these days memory comes in gigabytes. But in my particular case
> my file happens to be 0.5 gig, so I couldn't even open it in Emacs
> (presumably because I need a 64-bit OS and hardware; thanks). So,
> given the situation, I'm thinking that perhaps there is a way to use
> Emacs Lisp to read the file line by line, just as in Perl or Python.
> (The file is just an Apache log file: it can be processed line by line,
> split, or fed to sed/awk/grep with pipes. The reason I want to open it
> in Emacs and process it with elisp is more exploration than practical
> need.)
>

I can understand the motivation. However, as you point out in your
post, the log file you want to process is line oriented, and as you
also point out, Perl is good at line-oriented text processing.
(Actually, it can handle other things just fine as well, as exemplified
by the many modules that deal with large multi-line structures, such as
XML files.)

As mentioned, I can understand the motivation to do something just to
see if it can be done. However, I fail to see any real practical use in
an Emacs mode that would allow editing of extremely large files. As you
pointed out, the Emacs solution is good when the programmer/user wants
to move around, change text, and maybe even change structure using
Emacs' support for various structures. However, I can't see anybody
doing this type of editing on files that are hundreds of megs in size,
and if they are, they really need to re-think what they are doing.  I
cannot think of a single use case where you would have hand-edited
files that are hundreds of megs in size. Files of this type are
typically generated by applications and through things like logging.
You don't get hand-crafted XML files that are 500MB in size unless
you're mad or enjoy inflicting pain on yourself.

My personal stance is that you should use the most appropriate tool for
the job, not simply the tool you find the coolest or the one you are
most comfortable with - something about "if the only tool you have is
a hammer, everything looks like a nail" comes to mind.

I can't see a use case for editing extremely large files with a text
editor, and I think there are plenty of good tools for this already.
Once you move to a 64-bit platform, the maximum file size increases to
the point where there is even less need or demand for special modes to
edit files too large to be read into Emacs in one go.  Personally, I'd
rather see effort put towards other areas which would prove more
beneficial. For example, it would be great to see Emacs w3 revived and
efforts put in to add JavaScript support, so that you could visit more
sites without having to leave Emacs and get all that Emacs goodness as
well. It would be good to see an interface into various package
management systems, such as apt for Debian-based systems. It would be
good to see further development, or more developers working on some of
the really good packages to make them even better (e.g. AUCTeX,
planner-mode, org-mode, glient, SES, VM, etc.), or a whole new mode to
add functionality that hasn't yet been thought of and which may have
a real benefit to users.

Note that I'm not trying to start an argument, and as I stated, I can
fully appreciate the desire to just see if it can be done. I just don't
see any real benefit apart from the intellectual exercise (which may be
sufficient justification for many).

Tim

-- 
tcross (at) rapttech dot com dot au



* Re: opening large files (few hundred meg)
  2008-01-31  5:57               ` Tim X
@ 2008-01-31 15:35                 ` Stefan Monnier
  0 siblings, 0 replies; 34+ messages in thread
From: Stefan Monnier @ 2008-01-31 15:35 UTC (permalink / raw)
  To: help-gnu-emacs

> I can't see a use case for editing extremely large files with a text
> editor, and I think there are plenty of good tools for this already.
> Once you move to a 64-bit platform, the maximum file size increases to
> the point where there is even less need or demand for special modes to
> edit files too large to be read into Emacs in one go.  Personally, I'd
> rather see effort put towards other areas which would

Actually, I've several times had to look for something specific and
maybe even change it in some very large file.  The kind of situation
where this occurred made it difficult to use other tools because
I wasn't quite sure what I was looking for (regexp isearch is great for
that) and because the file was often binary, so line-oriented tools
don't work so well.  Yes, I could probably have done it with other
tools, but doing it with Emacs was easier.

Luckily, while large, those were not larger than Emacs's limits.
But even though Emacs could handle them, it had trouble doing so
because of the time it took to load the file and the swapping it
caused.  On a 64-bit system, you should be able to load a 10GB file if
you feel like it, but based on my knowledge of Emacs's internals, I can
assure you that you'll bump into bugs (many places where we turn 64-bit
Elisp integers into 32-bit C "int"), and that unless your machine has
more than 16GB of RAM, it'll be painfully slow.  So some major mode to
browse and even edit very large files does sound like a good idea, even
in a 64-bit world.  But it's not nearly high enough on my todo list to
have a chance in the next ... 10 years?


        Stefan




* Re: opening large files (few hundred meg)
       [not found]   ` <mailman.6646.1201548710.18990.help-gnu-emacs@gnu.org>
  2008-01-30 15:12     ` Stefan Monnier
@ 2008-01-31 22:55     ` Ilya Zakharevich
       [not found]     ` <200801312255.m0VMt701019096@powdermilk.math.berkeley.edu>
  2 siblings, 0 replies; 34+ messages in thread
From: Ilya Zakharevich @ 2008-01-31 22:55 UTC (permalink / raw)
  To: help-gnu-emacs


[A complimentary Cc of this posting was sent to
Eli Zaretskii 
<eliz@gnu.org>], who wrote in article <mailman.6646.1201548710.18990.help-gnu-emacs@gnu.org>:
> > > • Is there a general solution for working with files in elisp without
> > > actually loading the whole file?

> > Not really, since visiting a file reads all of it into an Emacs buffer.

> The problem is not with the buffer size per se, it's with the fact
> that Emacs needs to be able to address each byte of the file's text
> with an Emacs integer data type, which is 29 bits wide on 32-bit
> machines.

Are you sure?  I think it should be enough to address each char in the
buffer, plus 2 "guard zones" immediately before and after the buffer,
plus two "guard zones" at start and end of the file.

E.g., if the guard zone size is 1MB, then the "actual chunk of file"
goes from offset 2M to offset 126M in the buffer; accessing anything
from offset 1M to 2M "scrolls back" the chunk; accessing anything from
offset 0 to 1M loads the chunk at start of file, etc.

Why won't this work?

Yours,
Ilya



* Re: opening large files (few hundred meg)
       [not found]     ` <200801312255.m0VMt701019096@powdermilk.math.berkeley.edu>
@ 2008-02-01 11:04       ` Eli Zaretskii
       [not found]       ` <mailman.6836.1201863892.18990.help-gnu-emacs@gnu.org>
  1 sibling, 0 replies; 34+ messages in thread
From: Eli Zaretskii @ 2008-02-01 11:04 UTC (permalink / raw)
  To: help-gnu-emacs

> Date: Thu, 31 Jan 2008 14:55:07 -0800 (PST)
> From: Ilya Zakharevich <nospam-abuse@ilyaz.org>
> 
> > The problem is not with the buffer size per se, it's with the fact
> > that Emacs needs to be able to address each byte of the file's text
> > with an Emacs integer data type, which is 29 bits wide on 32-bit
> > machines.
> 
> Are you sure?

See src/buffer.h, where it defines `struct buffer_text'.  It has these
members:

    EMACS_INT gpt;              /* Char pos of gap in buffer.  */
    EMACS_INT z;                /* Char pos of end of buffer.  */
    EMACS_INT gpt_byte;         /* Byte pos of gap in buffer.  */
    EMACS_INT z_byte;           /* Byte pos of end of buffer.  */
    EMACS_INT gap_size;         /* Size of buffer's gap.  */

and then `struct buffer' has this:

    /* Char position of point in buffer.  */
    EMACS_INT pt;
    /* Byte position of point in buffer.  */
    EMACS_INT pt_byte;
    /* Char position of beginning of accessible range.  */
    EMACS_INT begv;
    /* Byte position of beginning of accessible range.  */
    EMACS_INT begv_byte;
    /* Char position of end of accessible range.  */
    EMACS_INT zv;
    /* Byte position of end of accessible range.  */
    EMACS_INT zv_byte;

(On 32-bit machines, EMACS_INT is the 29-bit-wide integer I was
talking about.)  So yes, I'm quite sure.

> I think it should be enough to address each char in the
> buffer, plus 2 "guard zones" immediately before and after the buffer,
> plus two "guard zones" at start and end of the file.
> 
> E.g., if the guard zone size is 1MB, then the "actual chunk of file"
> goes from offset 2M to offset 126M in the buffer; accessing anything
> from offset 1M to 2M "scrolls back" the chunk; accessing anything from
> offset 0 to 1M loads the chunk at start of file, etc.
> 
> Why won't this work?

Maybe it would, but I wasn't trying to describe some inherent
limitation of 32-bit machines; I was describing the limitation of the
_current_ Emacs implementation.  The OP wanted to know how Emacs can
be used to edit large files, not how Emacs can be modified.





* Re: opening large files (few hundred meg)
       [not found]       ` <mailman.6836.1201863892.18990.help-gnu-emacs@gnu.org>
@ 2008-02-01 22:26         ` Ilya Zakharevich
  0 siblings, 0 replies; 34+ messages in thread
From: Ilya Zakharevich @ 2008-02-01 22:26 UTC (permalink / raw)
  To: help-gnu-emacs

[A complimentary Cc of this posting was sent to
Eli Zaretskii 
<eliz@gnu.org>], who wrote in article <mailman.6836.1201863892.18990.help-gnu-emacs@gnu.org>:
> > > The problem is not with the buffer size per se, it's with the fact
> > > that Emacs needs to be able to address each byte of the file's text
> > > with an Emacs integer data type, which is 29 bits wide on 32-bit
> > > machines.
> > 
> > Are you sure?
> 
> See src/buffer.h, where it defines `struct buffer_text'.  It has these
> members:

My point was that the maximum buffer size has NOTHING to do with the
size of the file which may be HANDLED by this buffer.

> > Why won't this work?

> Maybe it would, but I wasn't trying to describe some inherent
> limitation of 32-bit machines; I was describing the limitation of the
> _current_ Emacs implementation.  The OP wanted to know how Emacs can
> be used to edit large files, not how Emacs can be modified.

I was not discussing modifications to Emacs.  I was discussing how to
use Emacs to show files which are larger than the buffer.

All you need is a way to know when the "guard regions" are touched.
This is definitely possible (by file-mode code) for display.  I do not
know whether it is possible for search too...

Hope this helps,
Ilya



* Re: opening large files (few hundred meg)
  2008-01-29 10:43       ` Johan Bockgård
  2008-01-29 15:35         ` Andreas Röhler
@ 2008-02-06  1:25         ` Samuel Karl Peterson
  2008-02-17 16:01           ` Kevin Rodgers
  1 sibling, 1 reply; 34+ messages in thread
From: Samuel Karl Peterson @ 2008-02-06  1:25 UTC (permalink / raw)
  To: help-gnu-emacs

bojohan+news@dd.chalmers.se (Johan Bockgård) on Tue, 29 Jan 2008
11:43:04 +0100 didst step forth and proclaim thus:

> How about using FUSE?
>
> http://chunkfs.florz.de/
> http://sourceforge.net/projects/joinsplitfs/
>

I love FUSE, but that's not an option for people who aren't on
GNU/Linux.

-- 
Sam Peterson
skpeterson At nospam ucdavis.edu
"if programmers were paid to remove code instead of adding it,
software would be much better" -- unknown



* Re: opening large files (few hundred meg)
  2008-01-29  9:08           ` Tim X
  2008-01-29 16:34             ` Xah Lee
@ 2008-02-06  1:47             ` Samuel Karl Peterson
  1 sibling, 0 replies; 34+ messages in thread
From: Samuel Karl Peterson @ 2008-02-06  1:47 UTC (permalink / raw)
  To: help-gnu-emacs

Tim X <timx@nospam.dev.null> on Tue, 29 Jan 2008 20:08:42 +1100 didst
step forth and proclaim thus:

> However, it seems ridiculous to attempt to open a text file of the
> size Xah is talking about inside an editor. Like others, I have to
> wonder why his log file isn't rotated more often so that it is in
> manageable chunks. It's obvious that nobody would read all of a text
> file that large (especially not every week). More than
> likely, you would use existing tools to select 'interesting' parts
> of the log and then deal with them. Personally, I'd use something
> like Perl or one of the many other scripting languages that are
> ideal for (and largely designed for) this sort of problem.

Funnily enough, as other people have said, while it's not a common use
case, it happens, and it can be useful to use something like an editor
because you don't know exactly what you're looking for.

I have been an ardent Emacs user for a number of years, but I gotta
say, this is one of the few things Vim really does "right".  They even
have plugins to help with the process:

http://www.vim.org/scripts/script.php?script_id=1506

I've never had any difficulty working on huge binary files with Vim.

There are plenty of other applications where the ability to work
efficiently with enormous files is highly desirable.  Emacs' hexl-mode
and tar file mode come immediately to mind.

Other editors have done it, the Emacs community brags that there's
nothing Emacs can't do or be used for, and yet this is something I have
known Emacs to be unable to do for as long as I can remember... well,
it just ought to come across as a little bit embarrassing to the Emacs
devs.  Just a smidgen.

-- 
Sam Peterson
skpeterson At nospam ucdavis.edu
"if programmers were paid to remove code instead of adding it,
software would be much better" -- unknown



* Re: opening large files (few hundred meg)
  2008-01-28 21:50       ` Jason Rumney
                           ` (3 preceding siblings ...)
  2008-01-30 14:55         ` Stefan Monnier
@ 2008-02-06 16:42         ` Mathias Dahl
  2008-02-06 16:55           ` Mathias Dahl
  4 siblings, 1 reply; 34+ messages in thread
From: Mathias Dahl @ 2008-02-06 16:42 UTC (permalink / raw)
  To: help-gnu-emacs

Jason Rumney <jasonrumney@gmail.com> writes:

> Such solutions have been proposed before, but the likely way that a
> user will navigate through such a huge file is by searching, so just
> paging parts of the file in and out is only part of the solution; it
> must also offer searching to be useful.

For the record, I tried to implement this some time ago:

http://www.emacswiki.org/cgi-bin/wiki/VLF

First I used head and tail, then I switched to insert-file-contents
with beg and end arguments and ran into the integer problem. However,
if one switches back to head and tail (or whatever other tool you
prefer), I am sure this can be made to work by using floating-point
numbers instead. If someone would like to extend my hack with this,
feel free to do so; at least you have something to start with.
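
Floats are up to the job, by the way: they represent integers exactly
up to 2^53, far beyond any file size in question here.  For example:

(format "%.0f" (* 4.0 1024 1024 1024))
=> "4294967296"

so the byte offsets handed to head/tail can safely live in floats.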

Read the wiki page, read the code.

/Mathias



* Re: opening large files (few hundred meg)
  2008-02-06 16:42         ` Mathias Dahl
@ 2008-02-06 16:55           ` Mathias Dahl
  0 siblings, 0 replies; 34+ messages in thread
From: Mathias Dahl @ 2008-02-06 16:55 UTC (permalink / raw)
  To: help-gnu-emacs

Mathias Dahl <brakjoller@gmail.com> writes:

> For the record, I tried to implement this some time ago:
>
> http://www.emacswiki.org/cgi-bin/wiki/VLF
>
> First I used head and tail, then I switched to insert-file-contents
> with beg and end arguments and ran into the integer problem. However,
> if one switches back to head and tail (or whatever other tool you
> prefer), I am sure this can be made to work by using floating-point
> numbers instead. If someone would like to extend my hack with this,
> feel free to do so; at least you have something to start with.
>
> Read the wiki page, read the code.

BTW, to use the head/tail approach:

(setq vlf-external-extraction t)

/Mathias



* Re: opening large files (few hundred meg)
  2008-01-29 16:34             ` Xah Lee
                                 ` (2 preceding siblings ...)
  2008-01-31  5:57               ` Tim X
@ 2008-02-08 11:25               ` Giacomo Boffi
  3 siblings, 0 replies; 34+ messages in thread
From: Giacomo Boffi @ 2008-02-08 11:25 UTC (permalink / raw)
  To: help-gnu-emacs

Xah Lee <xah@xahlee.org> writes:

> in my particular case, my file happens to be 0.5 gig,

afaik, you can open a 500MB file using XEmacs 
-- 
> There is an Inter-supporter side in all of us
A side mostly hidden by our underpants.
                        --- Basil Fawlty, on all channels at once (IFQ+ISC)



* Re: opening large files (few hundred meg)
  2008-02-06  1:25         ` Samuel Karl Peterson
@ 2008-02-17 16:01           ` Kevin Rodgers
  0 siblings, 0 replies; 34+ messages in thread
From: Kevin Rodgers @ 2008-02-17 16:01 UTC (permalink / raw)
  To: help-gnu-emacs

Samuel Karl Peterson wrote:
> bojohan+news@dd.chalmers.se (Johan Bockgård) on Tue, 29 Jan 2008
> 11:43:04 +0100 didst step forth and proclaim thus:
> 
>> How about using FUSE?
>>
>> http://chunkfs.florz.de/
>> http://sourceforge.net/projects/joinsplitfs/
>>
> 
> I love FUSE, but that's not an option for people who aren't on
> GNU/Linux.

FUSE also works on Mac OS X.  But in any case, a potential feature
should not be dismissed from consideration solely because it would not
be available on non-free platforms.

-- 
Kevin Rodgers
Denver, Colorado, USA






end of thread, other threads:[~2008-02-17 16:01 UTC | newest]

Thread overview: 34+ messages
2008-01-28 17:35 opening large files (few hundred meg) Xah Lee
2008-01-28 18:05 ` Sven Joachim
2008-01-28 19:31   ` Eli Zaretskii
2008-01-28 20:36     ` Andreas Röhler
     [not found]     ` <mailman.6652.1201552566.18990.help-gnu-emacs@gnu.org>
2008-01-28 21:50       ` Jason Rumney
2008-01-29  7:07         ` Andreas Röhler
2008-01-29  7:20         ` Thierry Volpiatto
     [not found]         ` <mailman.6666.1201591238.18990.help-gnu-emacs@gnu.org>
2008-01-29  9:08           ` Tim X
2008-01-29 16:34             ` Xah Lee
2008-01-29 19:06               ` Tom Tromey
2008-01-29 20:44                 ` Eli Zaretskii
     [not found]                 ` <mailman.6705.1201639469.18990.help-gnu-emacs@gnu.org>
2008-01-30 20:01                   ` Stefan Monnier
2008-01-30 22:04                     ` Eli Zaretskii
2008-01-29 22:10               ` Jason Rumney
2008-01-30 17:08                 ` Joel J. Adamson
2008-01-31  5:57               ` Tim X
2008-01-31 15:35                 ` Stefan Monnier
2008-02-08 11:25               ` Giacomo Boffi
2008-02-06  1:47             ` Samuel Karl Peterson
2008-01-29 14:52           ` Joel J. Adamson
2008-01-30 14:55         ` Stefan Monnier
2008-02-06 16:42         ` Mathias Dahl
2008-02-06 16:55           ` Mathias Dahl
2008-01-29 10:43       ` Johan Bockgård
2008-01-29 15:35         ` Andreas Röhler
2008-02-06  1:25         ` Samuel Karl Peterson
2008-02-17 16:01           ` Kevin Rodgers
2008-01-29 16:33       ` Ted Zlatanov
     [not found]   ` <mailman.6646.1201548710.18990.help-gnu-emacs@gnu.org>
2008-01-30 15:12     ` Stefan Monnier
2008-01-30 16:55       ` Sven Joachim
2008-01-30 21:53         ` Stefan Monnier
2008-01-31 22:55     ` Ilya Zakharevich
     [not found]     ` <200801312255.m0VMt701019096@powdermilk.math.berkeley.edu>
2008-02-01 11:04       ` Eli Zaretskii
     [not found]       ` <mailman.6836.1201863892.18990.help-gnu-emacs@gnu.org>
2008-02-01 22:26         ` Ilya Zakharevich
