unofficial mirror of emacs-devel@gnu.org 
* url-cache - (require 'url)
@ 2006-01-01 17:48 David Reitter
  2006-01-02  5:08 ` Stefan Monnier
  0 siblings, 1 reply; 22+ messages in thread
From: David Reitter @ 2006-01-01 17:48 UTC (permalink / raw)


I think url-cache.el needs a (require 'url) to define
url-configuration-directory, which is (now) needed to define
url-cache-directory.  Otherwise, a simple (require 'url-cache) fails
unless url is loaded beforehand.

--Lisp error: (void-variable url-configuration-directory)
   (expand-file-name "cache" url-configuration-directory)
   eval((expand-file-name "cache" url-configuration-directory))
   custom-initialize-reset(url-cache-directory (expand-file-name "cache" url-configuration-directory))
   custom-declare-variable(url-cache-directory (expand-file-name "cache" url-configuration-directory) ("/Applications/Aquamacs Emacs.app/Contents/Resources/lisp/url/url-cache.elc" . -662) :type directory :group url-file)
   require(url-cache)
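
Concretely, the fix would amount to something like this near the top
of url-cache.el (a sketch only: the defcustom form is reconstructed
from the backtrace above, and its docstring is paraphrased):

(require 'url)   ; defines url-configuration-directory

(defcustom url-cache-directory
  (expand-file-name "cache" url-configuration-directory)
  "Directory used to store cached documents."   ; paraphrased
  :type 'directory
  :group 'url-file)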


* Re: url-cache - (require 'url)
  2006-01-01 17:48 url-cache - (require 'url) David Reitter
@ 2006-01-02  5:08 ` Stefan Monnier
  2006-01-02  9:47   ` David Reitter
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Monnier @ 2006-01-02  5:08 UTC (permalink / raw)
  Cc: emacs-devel

> I think url-cache.el needs a (require 'url) to define
> url-configuration-directory, which is (now) needed to define
> url-cache-directory.  Otherwise, a simple (require 'url-cache) fails
> unless url is loaded beforehand.

> --Lisp error: (void-variable url-configuration-directory)
>    (expand-file-name "cache" url-configuration-directory)
>    eval((expand-file-name "cache" url-configuration-directory))
>    custom-initialize-reset(url-cache-directory (expand-file-name "cache" url-configuration-directory))
>    custom-declare-variable(url-cache-directory (expand-file-name "cache" url-configuration-directory) ("/Applications/Aquamacs Emacs.app/Contents/Resources/lisp/url/url-cache.elc" . -662) :type directory :group url-file)
>    require(url-cache)

When I removed the autoload on url-configuration-directory I ended up
convincing myself that url-cache.el was never loaded before url.el.
You're probably right that a `require' is in order, but I'd be interested
to know if you've bumped into a real-life case where it's needed.


        Stefan


* Re: url-cache - (require 'url)
  2006-01-02  5:08 ` Stefan Monnier
@ 2006-01-02  9:47   ` David Reitter
  2006-01-02 16:25     ` Stefan Monnier
  0 siblings, 1 reply; 22+ messages in thread
From: David Reitter @ 2006-01-02  9:47 UTC (permalink / raw)
  Cc: emacs-devel

On 2 Jan 2006, at 05:08, Stefan Monnier wrote:

>> I think url-cache.el needs a (require 'url) to define
>> url-configuration-directory, which is (now) needed to define
>> url-cache-directory.  Otherwise, a simple (require 'url-cache)
>> fails unless url is loaded beforehand.
>
> When I removed the autoload on url-configuration-directory I ended up
> convincing myself that url-cache.el was never loaded before url.el.
> You're probably right that a `require' is in order, but I'd be  
> interested
> to know if you've bumped into a real-life case where it's needed.

Start up with -Q, then do

(url-http (url-generic-parse-url "http://google.com") nil nil)

This should autoload some things, but doesn't load url [early
enough], leading to the above problem.

In my actual case, I had the following sequence of `require's before  
a call to `url-http', provoking the problem in a different manner:

       (require 'url-parse)
       (require 'url-methods)
       (require 'url-cache)

I inserted these in order to avoid showing "loading" messages in the  
echo area, because I find they are not needed in the standard case  
and lead to information overload.

- D


* Re: url-cache - (require 'url)
  2006-01-02  9:47   ` David Reitter
@ 2006-01-02 16:25     ` Stefan Monnier
  2006-01-02 18:03       ` David Reitter
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Monnier @ 2006-01-02 16:25 UTC (permalink / raw)
  Cc: emacs-devel

>>> I think url-cache.el needs a (require 'url) to define
>>> url-configuration-directory, which is (now) needed to define
>>> url-cache-directory.  Otherwise, a simple (require 'url-cache)
>>> fails unless url is loaded beforehand.
>> 
>> When I removed the autoload on url-configuration-directory I ended up
>> convincing myself that url-cache.el was never loaded before url.el.
>> You're probably right that a `require' is in order, but I'd be
>> interested to know if you've bumped into a real-life case where
>> it's needed.

> Start up with -Q, then do

> (url-http (url-generic-parse-url "http://google.com") nil nil)

Hmm... and where does this code come from?
Why isn't it using url-retrieve(-synchronously) instead?


        Stefan


> I inserted these in order to avoid showing "loading" messages in the  echo
> area, because I find they are not needed in the standard case and lead to
> information overload.

Yes, the "loading" messages don't make much sense to me either: why output
them for autoload but not for require?
I think they should go the same way as the GC messages.


* Re: url-cache - (require 'url)
  2006-01-02 16:25     ` Stefan Monnier
@ 2006-01-02 18:03       ` David Reitter
  2006-01-03  1:46         ` Stefan Monnier
  0 siblings, 1 reply; 22+ messages in thread
From: David Reitter @ 2006-01-02 18:03 UTC (permalink / raw)
  Cc: emacs-devel

On 2 Jan 2006, at 16:25, Stefan Monnier wrote:

>> Start up with -Q, then do
>
>> (url-http (url-generic-parse-url "http://google.com") nil nil)
>
> Hmm... and where does this code come from?
> Why isn't it using url-retrieve(-synchronously) instead?

Could do that, but url-http is documented to do what it does, and it  
works fine for me and others. Is url-retrieve higher-level, and  
should it be used?

The above code stems from my packages. Of course, I give it a proper  
callback function and fetch a different URL.

- D


* Re: url-cache - (require 'url)
  2006-01-02 18:03       ` David Reitter
@ 2006-01-03  1:46         ` Stefan Monnier
  2006-01-03  9:51           ` David Reitter
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Monnier @ 2006-01-03  1:46 UTC (permalink / raw)
  Cc: emacs-devel

>>> Start up with -Q, then do
>> 
>>> (url-http (url-generic-parse-url "http://google.com") nil nil)
>> 
>> Hmm... and where does this code come from?
>> Why isn't it using url-retrieve(-synchronously) instead?

> Could do that, but url-http is documented to do what it does, and it works
> fine for me and others.

Where is it documented apart from its docstring?

> Is url-retrieve higher-level, and should it be used?

Yes.  See the anemic texinfo docs.
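
For instance, a minimal asynchronous fetch through it looks roughly
like this (a sketch; the callback takes &rest so it is indifferent to
the exact arguments it gets called with):

(url-retrieve "http://www.gnu.org/"
              (lambda (&rest _ignore)
                ;; Called in the retrieval buffer, which contains the
                ;; response headers followed by the body.
                (message "Retrieved %d bytes" (buffer-size))))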


        Stefan


* Re: url-cache - (require 'url)
  2006-01-03  1:46         ` Stefan Monnier
@ 2006-01-03  9:51           ` David Reitter
  2006-01-03 15:54             ` Stefan Monnier
  0 siblings, 1 reply; 22+ messages in thread
From: David Reitter @ 2006-01-03  9:51 UTC (permalink / raw)
  Cc: emacs-devel

On 3 Jan 2006, at 01:46, Stefan Monnier wrote:
>
>> Is url-retrieve higher-level, and should it be used?
>
> Yes.  See the anemic texinfo docs.

Fine.
Using it now, it complains about two things:

/Users/dr/.emacs.d/url/history does not exist.
Could not load cookie file /Users/dr/.emacs.d/url/cookies

url-history-track is nil.  Why load some history file when all I want
to do is retrieve a defined URL?
I don't use cookies, so can I prevent it from loading the cookie file?

See, I'm using url-retrieve programmatically and without user
interaction every three days when my Emacs is started.  This is to
check whether there is a new version on the server.  Printing
unnecessary messages is a no-no in this situation.  So is wasting CPU
time.
I'm binding all sorts of things temporarily (like `url-automatic-caching'
or `url-confirmation-func') just in case the user customizes these
things.  But as soon as a new option is added, I'll have to update my
code.


* Re: url-cache - (require 'url)
  2006-01-03  9:51           ` David Reitter
@ 2006-01-03 15:54             ` Stefan Monnier
  2006-01-03 16:15               ` David Reitter
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Monnier @ 2006-01-03 15:54 UTC (permalink / raw)
  Cc: emacs-devel

>>> Is url-retrieve higher-level, and should it be used?
>> Yes.  See the anemic texinfo docs.

> Fine.
> Using it now, it complains about two things:

> /Users/dr/.emacs.d/url/history does not exist.
> Could not load cookie file /Users/dr/.emacs.d/url/cookies

Could you make a proper bug report?  I have no idea what you've done to get
the above messages, so it's kind of hard to debug it.


        Stefan


* Re: url-cache - (require 'url)
  2006-01-03 15:54             ` Stefan Monnier
@ 2006-01-03 16:15               ` David Reitter
  2006-01-03 20:05                 ` Stefan Monnier
  2006-01-05 22:11                 ` Stefan Monnier
  0 siblings, 2 replies; 22+ messages in thread
From: David Reitter @ 2006-01-03 16:15 UTC (permalink / raw)
  Cc: emacs-devel

On 3 Jan 2006, at 15:54, Stefan Monnier wrote:
>
> Could you make a proper bug report?  I have no idea what you've done
> to get the above messages, so it's kind of hard to debug it.

I wouldn't consider it a bug -- but the messages in the echo area and  
in *Messages* make url-retrieve less useful in a programmatic usage  
context, where I don't want to annoy the user with unnecessary warnings.
Again, the warnings that appear are

/Users/dr/.emacs.d/url/history does not exist.
Could not load cookie file /Users/dr/.emacs.d/url/cookies

I have no ~/.emacs.d/url directory. The warnings appear only upon  
first invocation of the below code.

I can reproduce with a plain current Emacs (2006-01-02) and the  
following code:


(defun test-done ()
  (switch-to-buffer retr-buffer))

(require 'url)
(let ((url "http://www.gnu.org/")
      (url-automatic-caching nil)
      (url-standalone-mode nil)
      (url-show-status nil)
      (url-history-track nil)
      (url-confirmation-func (lambda (x) t)))
  ;; HTTP GET
  (setq retr-buffer
        (url-retrieve url 'test-done nil)))


PS: If url-retrieve is the highest-level entry point, shouldn't it be
autoloaded?


* Re: url-cache - (require 'url)
  2006-01-03 16:15               ` David Reitter
@ 2006-01-03 20:05                 ` Stefan Monnier
  2006-01-05 22:11                 ` Stefan Monnier
  1 sibling, 0 replies; 22+ messages in thread
From: Stefan Monnier @ 2006-01-03 20:05 UTC (permalink / raw)
  Cc: emacs-devel

> PS: If url-retrieve is the highest-level entry point, shouldn't it be autoloaded?

It is.


        Stefan


* Re: url-cache - (require 'url)
  2006-01-03 16:15               ` David Reitter
  2006-01-03 20:05                 ` Stefan Monnier
@ 2006-01-05 22:11                 ` Stefan Monnier
  2006-01-06 18:12                   ` Mark Plaksin
  1 sibling, 1 reply; 22+ messages in thread
From: Stefan Monnier @ 2006-01-05 22:11 UTC (permalink / raw)
  Cc: emacs-devel

>> Could you make a proper bug report?  I have no idea what you've  done to
>> get the above messages, so it's kind of hard to debug it.

> I wouldn't consider it a bug -- but the messages in the echo area and  in
> *Messages* make url-retrieve less useful in a programmatic usage context,
> where I don't want to annoy the user with unnecessary warnings.

I see your point.  Maybe there should be a more "raw" command than
url-retrieve, which doesn't try and fiddle with history, cache, ...
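
Something like the following, say (a sketch only; the name
url-retrieve-raw is made up, and since url-retrieve is asynchronous
these let-bindings only reliably cover the initiation of the request):

(defun url-retrieve-raw (url callback)
  "Sketch: retrieve URL without history, caching, or status messages."
  (let ((url-history-track nil)       ; don't touch the history file
        (url-automatic-caching nil)   ; don't write to the cache
        (url-show-status nil))        ; no progress messages
    (url-retrieve url callback)))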

> Again, the warnings that appear are

> /Users/dr/.emacs.d/url/history does not exist.
> Could not load cookie file /Users/dr/.emacs.d/url/cookies

I've just removed them.  After all, this situation is completely normal.


        Stefan


* Re: url-cache - (require 'url)
  2006-01-05 22:11                 ` Stefan Monnier
@ 2006-01-06 18:12                   ` Mark Plaksin
  2006-01-09  1:37                     ` Stefan Monnier
  0 siblings, 1 reply; 22+ messages in thread
From: Mark Plaksin @ 2006-01-06 18:12 UTC (permalink / raw)


Stefan Monnier <monnier@iro.umontreal.ca> writes:

>>> Could you make a proper bug report?  I have no idea what you've  done to
>>> get the above messages, so it's kind of hard to debug it.
>
>> I wouldn't consider it a bug -- but the messages in the echo area and  in
>> *Messages* make url-retrieve less useful in a programmatic usage context,
>> where I don't want to annoy the user with unnecessary warnings.
>
> I see your point.  Maybe there should be a more "raw" command than
> url-retrieve, which doesn't try and fiddle with history, cache, ...

It would also be nice if there was an easy way to get at the HTTP headers
associated with a response.  Maybe there is a way but I can't find it.

I'm trying to make nnrss do the right thing for feeds which use ETag [1]
and am fumbling through advising url-retrieve-synchronously to get the
necessary headers.  Something like this:

(defadvice url-retrieve-synchronously (after map-save-headers)
  "Save headers in nnrss-feed-headers."
  ;; FIXME: Somehow save-excursion didn't work.
  (let ((buffer (current-buffer)))
    (switch-to-buffer ad-return-value)
    (mail-narrow-to-head)
    (setq nnrss-feed-headers
          `(("Last-Modified" . ,(mail-fetch-field "Last-Modified"))
            ("ETag" . ,(mail-fetch-field "ETag"))))
    (widen)
    (switch-to-buffer buffer)))
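
For the advice to actually run it also has to be activated, with the
classic advice.el call:

(ad-activate 'url-retrieve-synchronously)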

Footnotes: 
[1]  http://fishbowl.pastiche.org/2002/10/21/http_conditional_get_for_rss_hackers


* Re: url-cache - (require 'url)
  2006-01-06 18:12                   ` Mark Plaksin
@ 2006-01-09  1:37                     ` Stefan Monnier
  2006-01-14 18:32                       ` Mark Plaksin
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Monnier @ 2006-01-09  1:37 UTC (permalink / raw)
  Cc: emacs-devel

> It would also be nice if there was an easy way to get at the HTTP headers
> associated with a response.  Maybe there is a way but I can't find it.

Maybe for that, url-http is more appropriate.
After all, there won't be any HTTP headers if the URL is not using HTTP.


        Stefan


* Re: url-cache - (require 'url)
  2006-01-09  1:37                     ` Stefan Monnier
@ 2006-01-14 18:32                       ` Mark Plaksin
  2006-01-15  4:45                         ` Stefan Monnier
  0 siblings, 1 reply; 22+ messages in thread
From: Mark Plaksin @ 2006-01-14 18:32 UTC (permalink / raw)



Stefan Monnier <monnier@iro.umontreal.ca> writes:

>> It would also be nice if there was an easy way to get at the HTTP headers
>> associated with a response.  Maybe there is a way but I can't find it.
>
> Maybe for that, url-http is more appropriate.
> After all, there won't be any HTTP headers if the URL is not using HTTP.

Right.  How about this patch to url-http.el?

Test like this:
(setq url-http-save-headers t)
(url-retrieve-synchronously "http://rss.slashdot.org/Slashdot/slashdot")
(setq url-http-save-headers nil)

After that url-http-headers should look like this:

(("Content-Type" . "text/xml;charset=utf-8")
 ("Transfer-Encoding" . "chunked")
 ("Connection" . "Keep-Alive")
 ("Keep-Alive" . "timeout=5, max=100")
 ("ETag" . "xRkHefoJaorPexoLfvkTcHeudUY")
 ("Last-Modified" . "Sat, 14 Jan 2006 18:29:23 GMT")
 ("Server" . "Apache")
 ("Date" . "Sat, 14 Jan 2006 18:30:49 GMT"))


[-- Attachment #2: url-http.saveheaders.diff --]
[-- Type: text/plain, Size: 2028 bytes --]

--- url-http.el.orig	2006-01-14 13:11:31.000000000 -0500
+++ url-http.el	2006-01-14 13:30:37.000000000 -0500
@@ -67,6 +67,15 @@
 request.
 ")
 
+(defvar url-http-save-headers nil
+  "If non-nil, save HTTP headers in `url-http-headers'.
+Defaults to nil.  A useful toggle for applications that need access
+to HTTP headers.")
+
+(defvar url-http-headers nil
+  "Alist of HTTP headers.
+Populated when a URL is retrieved and `url-http-save-headers' is non-nil.")
+
 ;(eval-when-compile
 ;; These are all macros so that they are hidden from external sight
 ;; when the file is byte-compiled.
@@ -382,6 +391,8 @@
   (url-http-parse-response)
   (mail-narrow-to-head)
   ;;(narrow-to-region (point-min) url-http-end-of-headers)
+  (if url-http-save-headers
+      (setq url-http-headers (url-http-get-all-headers)))
   (let ((class nil)
 	(success nil))
     (setq class (/ url-http-response-status 100))
@@ -1229,6 +1240,32 @@
     (if buffer (kill-buffer buffer))
     options))
 
+(defun url-http-get-all-headers ()
+  "Return an alist of HTTP headers.
+Assumes it is called from a buffer which has been narrowed to the headers."
+  (save-excursion
+    (let (opoint header value alist)
+      (point-min)
+      ;; Skip the HTTP status
+      (forward-line 1)
+      (while (not (= (point) (point-max)))
+        (setq opoint (point))
+        (re-search-forward ":")
+        (forward-char -1)
+        (setq header (buffer-substring-no-properties opoint (point)))
+        (re-search-forward ":[\t ]*")
+        (setq opoint (point))
+        ;; find the end of the header
+        (while (progn (forward-line 1)
+                      (looking-at "[ \t]")))
+        ;; Back up over newline, then trailing spaces or tabs
+        (forward-char -1)
+        (skip-chars-backward " \t" opoint)
+        (setq value (buffer-substring-no-properties opoint (point)))
+        (forward-line 1)
+        (add-to-list 'alist `(,header . ,value)))
+      alist)))
+
 (provide 'url-http)
 
 ;; arch-tag: ba7c59ae-c0f4-4a31-9617-d85f221732ee


* Re: url-cache - (require 'url)
  2006-01-14 18:32                       ` Mark Plaksin
@ 2006-01-15  4:45                         ` Stefan Monnier
  2006-01-15  4:59                           ` Mark Plaksin
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Monnier @ 2006-01-15  4:45 UTC (permalink / raw)
  Cc: emacs-devel

>>> It would also be nice if there was an easy way to get at the HTTP headers
>>> associated with a response.  Maybe there is a way but I can't find it.
>> 
>> Maybe for that, url-http is more appropriate.
>> After all, there won't be any HTTP headers if the URL is not using HTTP.

> Right.  How about this patch to url-http.el?

> Test like this:
> (setq url-http-save-headers t)
> (url-retrieve-synchronously "http://rss.slashdot.org/Slashdot/slashdot")
> (setq url-http-save-headers nil)

I'm not sure I understand what you mean by "agreed".  At first it seems you
agree that url-http is preferable, but then the example code that uses your
patch uses url-retrieve.


        Stefan


* Re: url-cache - (require 'url)
  2006-01-15  4:45                         ` Stefan Monnier
@ 2006-01-15  4:59                           ` Mark Plaksin
  2006-01-15  6:09                             ` Stefan Monnier
  0 siblings, 1 reply; 22+ messages in thread
From: Mark Plaksin @ 2006-01-15  4:59 UTC (permalink / raw)


Stefan Monnier <monnier@iro.umontreal.ca> writes:

>>>> It would also be nice if there was an easy way to get at the HTTP headers
>>>> associated with a response.  Maybe there is a way but I can't find it.
>>> 
>>> Maybe for that, url-http is more appropriate.
>>> After all, there won't be any HTTP headers if the URL is not using HTTP.
>
>> Right.  How about this patch to url-http.el?
>
>> Test like this:
>> (setq url-http-save-headers t)
>> (url-retrieve-synchronously "http://rss.slashdot.org/Slashdot/slashdot")
>> (setq url-http-save-headers nil)
>
> I'm not sure I understand what you mean by "agreed".  At first it seems you
> agree that url-http is preferable, but then the example code that uses your
> patch uses url-retrieve.

I agree that the code which stores HTTP headers in a variable belongs in
url-http.el.  It's easier to test by calling url-retrieve but you can also
test by calling url-http:

(setq url-http-save-headers t)
(url-http (url-generic-parse-url "http://www.usg.edu/") (lambda ()) nil)
(setq url-http-save-headers nil)

At this point url-http-headers should contain the HTTP headers returned by
www.usg.edu.


* Re: url-cache - (require 'url)
  2006-01-15  4:59                           ` Mark Plaksin
@ 2006-01-15  6:09                             ` Stefan Monnier
  2006-01-15 18:18                               ` Mark Plaksin
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Monnier @ 2006-01-15  6:09 UTC (permalink / raw)
  Cc: emacs-devel

>>>>> It would also be nice if there was an easy way to get at the HTTP headers
>>>>> associated with a response.  Maybe there is a way but I can't find it.
>>>> 
>>>> Maybe for that, url-http is more appropriate.
>>>> After all, there won't be any HTTP headers if the URL is not using HTTP.
>> 
>>> Right.  How about this patch to url-http.el?
>> 
>>> Test like this:
>>> (setq url-http-save-headers t)
>>> (url-retrieve-synchronously "http://rss.slashdot.org/Slashdot/slashdot")
>>> (setq url-http-save-headers nil)
>> 
>> I'm not sure I understand what you mean by "agreed".  At first it seems you
>> agree that url-http is preferable, but then the example code that uses your
>> patch uses url-retrieve.

> I agree that the code which stores HTTP headers in a variable belongs in
> url-http.el.  It's easier to test by calling url-retrieve but you can also
> test by calling url-http:

> (setq url-http-save-headers t)
> (url-http (url-generic-parse-url "http://www.usg.edu/") (lambda ()) nil)
> (setq url-http-save-headers nil)

> At this point url-http-headers should contain the HTTP headers returned by
> www.usg.edu.

BTW, the docstring of url-retrieve says:

   CALLBACK is called when the object has been completely retrieved, with
   the current buffer containing the object, and any MIME headers associated
   with it.

so the headers should be readily available already.
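
That is, a caller can read them straight out of the buffer handed to
the callback (or returned by url-retrieve-synchronously); a sketch
using the same mail-narrow-to-head/mail-fetch-field helpers as the
advice earlier in this thread:

(let ((buf (url-retrieve-synchronously "http://www.gnu.org/")))
  (with-current-buffer buf
    (goto-char (point-min))
    (mail-narrow-to-head)              ; narrow to the response headers
    (prog1 (mail-fetch-field "ETag")   ; nil if the server sent no ETag
      (widen))))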


        Stefan


* Re: url-cache - (require 'url)
  2006-01-15  6:09                             ` Stefan Monnier
@ 2006-01-15 18:18                               ` Mark Plaksin
  2006-01-16  2:26                                 ` Stefan Monnier
  0 siblings, 1 reply; 22+ messages in thread
From: Mark Plaksin @ 2006-01-15 18:18 UTC (permalink / raw)


Stefan Monnier <monnier@iro.umontreal.ca> writes:

> BTW, the docstring of url-retrieve says:
>
>    CALLBACK is called when the object has been completely retrieved, with
>    the current buffer containing the object, and any MIME headers associated
>    with it.
>
> so the headers should be readily available already.

Hmm, you're right.  Perhaps the docstring should say "any MIME or protocol
headers".  When I read the docstring the first time I assumed it didn't
include HTTP headers.

Maybe there's a better approach to the problem I'm trying to solve.  I want
to add support for ETags to nnrss in Gnus.  To do that, nnrss needs access
to HTTP headers.  nnrss currently uses mm-url-insert which calls
url-insert-file-contents.  Those seem like the right functions to use but
they don't provide access to the HTTP headers.

Should I try changing mm-url-insert so it doesn't use
url-insert-file-contents and gives access to the headers?  I can also
advise one of the functions and have the advice fetch the headers.  It
seems like it would be useful to have that functionality in
url-insert-file-contents (or some part of the URL package itself) but maybe
not.

What do you think?

Thanks!


* Re: url-cache - (require 'url)
  2006-01-15 18:18                               ` Mark Plaksin
@ 2006-01-16  2:26                                 ` Stefan Monnier
  2006-01-16  2:49                                   ` Mark Plaksin
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Monnier @ 2006-01-16  2:26 UTC (permalink / raw)
  Cc: emacs-devel

> Maybe there's a better approach to the problem I'm trying to solve.  I want
> to add support for ETags to nnrss in Gnus.  To do that, nnrss needs access
> to HTTP headers.  nnrss currently uses mm-url-insert which calls
> url-insert-file-contents.  Those seem like the right functions to use but
> they don't provide access to the HTTP headers.

Why do they seem better to you than url-retrieve (or even url-http) given
the fact that they do not give you the info you need?


        Stefan


* Re: url-cache - (require 'url)
  2006-01-16  2:26                                 ` Stefan Monnier
@ 2006-01-16  2:49                                   ` Mark Plaksin
  2006-01-16 18:45                                     ` Stefan Monnier
  0 siblings, 1 reply; 22+ messages in thread
From: Mark Plaksin @ 2006-01-16  2:49 UTC (permalink / raw)


Stefan Monnier <monnier@iro.umontreal.ca> writes:

>> Maybe there's a better approach to the problem I'm trying to solve.  I want
>> to add support for ETags to nnrss in Gnus.  To do that, nnrss needs access
>> to HTTP headers.  nnrss currently uses mm-url-insert which calls
>> url-insert-file-contents.  Those seem like the right functions to use but
>> they don't provide access to the HTTP headers.
>
> Why do they seem better to you than url-retrieve (or even url-http) given
> the fact that they do not give you the info you need?

Because url-insert-file-contents does some coding-system things that I
assume are important for inserting URLs into buffers.  I don't know
anything about coding-systems.  I could write a new function that uses
url-retrieve or url-http but I'd probably have to duplicate the
coding-system parts which seems wasteful.

It also seems that other applications that want to insert the contents of a
URL could benefit from having access to HTTP headers.


* Re: url-cache - (require 'url)
  2006-01-16  2:49                                   ` Mark Plaksin
@ 2006-01-16 18:45                                     ` Stefan Monnier
  2006-02-19 19:56                                       ` Mark Plaksin
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Monnier @ 2006-01-16 18:45 UTC (permalink / raw)
  Cc: emacs-devel

>>> Maybe there's a better approach to the problem I'm trying to solve.  I want
>>> to add support for ETags to nnrss in Gnus.  To do that, nnrss needs access
>>> to HTTP headers.  nnrss currently uses mm-url-insert which calls
>>> url-insert-file-contents.  Those seem like the right functions to use but
>>> they don't provide access to the HTTP headers.
>> 
>> Why do they seem better to you than url-retrieve (or even url-http) given
>> the fact that they do not give you the info you need?

> Because url-insert-file-contents does some coding-system things that I
> assume are important for inserting URLs into buffers.  I don't know
> anything about coding-systems.  I could write a new function that uses
> url-retrieve or url-http but I'd probably have to duplicate the
> coding-system parts which seems wasteful.

> It also seems that other applications that want to insert the contents of a
> URL could benefit from having access to HTTP headers.

Good point.

I guess a good answer to that is simply to make url-insert-file-contents
"trivial" by moving most of its contents to a separate function.
Say url-insert, as in the patch below.  Does that provide the functionality
you're looking for?


        Stefan


--- url-handlers.el	03 jan 2006 11:16:30 -0500	1.16
+++ url-handlers.el	16 jan 2006 13:42:03 -0500	
@@ -197,33 +197,47 @@
     (url-copy-file url filename)
     filename))
 
+(defun url-insert (buffer &optional beg end)
+  "Insert the body of a URL object.
+BUFFER should be a complete URL buffer as returned by `url-retrieve'.
+If the headers specify a coding-system, it is applied to the body before it is inserted.
+Returns a list of the form (SIZE CHARSET), where SIZE is the size in bytes
+of the inserted text and CHARSET is the charset that was specified in the header,
+or nil if none was found.
+BEG and END can be used to only insert a subpart of the body.
+They count bytes from the beginning of the body."
+  (let* ((handle (with-current-buffer buffer (mm-dissect-buffer t)))
+         (data (with-current-buffer (mm-handle-buffer handle)
+                 (if beg
+                     (buffer-substring (+ (point-min) beg)
+                                       (if end (+ (point-min) end) (point-max)))
+		   (buffer-string))))
+         (charset (mail-content-type-get (mm-handle-type handle)
+                                          'charset)))
+    (mm-destroy-parts handle)
+    (if charset
+        (insert (mm-decode-string data (mm-charset-to-coding-system charset)))
+      (insert data))
+    (list (length data) charset)))
+
 ;;;###autoload
 (defun url-insert-file-contents (url &optional visit beg end replace)
-  (let ((buffer (url-retrieve-synchronously url))
-	(handle nil)
-	(charset nil)
-	(data nil))
+  (let ((buffer (url-retrieve-synchronously url)))
     (if (not buffer)
 	(error "Opening input file: No such file or directory, %s" url))
     (if visit (setq buffer-file-name url))
-    (with-current-buffer buffer
-      (setq handle (mm-dissect-buffer t))
-      (set-buffer (mm-handle-buffer handle))
-      (setq data (if beg (buffer-substring beg end)
-		   (buffer-string))))
-    (kill-buffer buffer)
-    (mm-destroy-parts handle)
-    (if replace (delete-region (point-min) (point-max)))
     (save-excursion
-      (setq charset (mail-content-type-get (mm-handle-type handle)
-					     'charset))
-      (let ((start (point)))
-	(if charset
-	    (insert (mm-decode-string data (mm-charset-to-coding-system charset)))
-	  (progn
-	    (insert data)
-	    (decode-coding-inserted-region start (point) url visit beg end replace)))))
-    (list url (length data))))
+      (let* ((start (point))
+             (size-and-charset (url-insert buffer beg end)))
+        (kill-buffer buffer)
+        (when replace
+          (delete-region (point-min) start)
+          (delete-region (point) (point-max)))
+        (unless (cadr size-and-charset)
+          ;; If the headers don't specify any particular charset, use the
+          ;; usual heuristic/rules that we apply to files.
+          (decode-coding-inserted-region start (point) url visit beg end replace))
+        (list url (car size-and-charset))))))
 
 (defun url-file-name-completion (url directory)
   (error "Unimplemented"))


* Re: url-cache - (require 'url)
  2006-01-16 18:45                                     ` Stefan Monnier
@ 2006-02-19 19:56                                       ` Mark Plaksin
  0 siblings, 0 replies; 22+ messages in thread
From: Mark Plaksin @ 2006-02-19 19:56 UTC (permalink / raw)


Stefan Monnier <monnier@iro.umontreal.ca> writes:

>>>> Maybe there's a better approach to the problem I'm trying to solve.  I want
>>>> to add support for ETags to nnrss in Gnus.  To do that, nnrss needs access
>>>> to HTTP headers.  nnrss currently uses mm-url-insert which calls
>>>> url-insert-file-contents.  Those seem like the right functions to use but
>>>> they don't provide access to the HTTP headers.
>>> 
>>> Why do they seem better to you than url-retrieve (or even url-http) given
>>> the fact that they do not give you the info you need?
>
>> Because url-insert-file-contents does some coding-system things that I
>> assume are important for inserting URLs into buffers.  I don't know
>> anything about coding-systems.  I could write a new function that uses
>> url-retrieve or url-http but I'd probably have to duplicate the
>> coding-system parts which seems wasteful.
>
>> It also seems that other applications that want to insert the contents of a
>> URL could benefit from having access to HTTP headers.
>
> Good point.
>
> I guess a good answer to that is simply to make url-insert-file-contents
> "trivial" by moving most of its contents to a separate function.
> Say url-insert, as in the patch below.  Does that provide the functionality
> you're looking for?

(Sorry for the huge delay.  Busy life.)

This looks good to me.  The idea being that any application which needs
access to HTTP headers should call url-retrieve{,-synchronously} and then
use url-insert if it wants to insert the body in a buffer, right?
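
In other words, something like this sketch (url-insert as in the
patch above; process-body is a hypothetical stand-in for application
code):

(let ((buf (url-retrieve-synchronously "http://www.gnu.org/"))
      etag)
  (with-current-buffer buf
    (goto-char (point-min))
    (mail-narrow-to-head)
    (setq etag (mail-fetch-field "ETag"))
    (widen))
  (with-temp-buffer
    (url-insert buf)                        ; decode and insert the body here
    (process-body (buffer-string) etag)))   ; hypothetical consumer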

Thanks!

