//! Functions dealing with attributes and meta items.

use crate::ast::{AttrArgs, AttrArgsEq, AttrId, AttrItem, AttrKind, AttrStyle, AttrVec, Attribute};
use crate::ast::{DelimArgs, Expr, ExprKind, LitKind, MetaItemLit};
use crate::ast::{MetaItem, MetaItemKind, NestedMetaItem, NormalAttr};
use crate::ast::{Path, PathSegment, DUMMY_NODE_ID};
use crate::ptr::P;
use crate::token::{self, CommentKind, Delimiter, Token};
use crate::tokenstream::{DelimSpan, Spacing, TokenTree};
use crate::tokenstream::{LazyAttrTokenStream, TokenStream};
use crate::util::comments;
use crate::util::literal::escape_string_symbol;
use rustc_index::bit_set::GrowableBitSet;
use rustc_span::symbol::{sym, Ident, Symbol};
use rustc_span::Span;
use std::iter;
use std::sync::atomic::{AtomicU32, Ordering};
use thin_vec::{thin_vec, ThinVec};

pub struct MarkedAttrs(GrowableBitSet<AttrId>);

impl MarkedAttrs {
    pub fn new() -> Self {
        // We have no idea how many attributes there will be, so just
        // initiate the vectors with 0 bits. We'll grow them as necessary.
        MarkedAttrs(GrowableBitSet::new_empty())
    }

    pub fn mark(&mut self, attr: &Attribute) {
        self.0.insert(attr.id);
    }

    pub fn is_marked(&self, attr: &Attribute) -> bool {
        self.0.contains(attr.id)
    }
}

pub struct AttrIdGenerator(AtomicU32);

impl AttrIdGenerator {
    pub fn new() -> Self {
        AttrIdGenerator(AtomicU32::new(0))
    }

    pub fn mk_attr_id(&self) -> AttrId {
        let id = self.0.fetch_add(1, Ordering::Relaxed);
        assert!(id != u32::MAX);
        AttrId::from_u32(id)
    }
}
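
// Illustrative note (added commentary, not from the original sources): the generator
// hands out ids sequentially, so a single shared `AttrIdGenerator` gives every
// constructed attribute a distinct `AttrId`. A minimal sketch, assuming a caller owns `g`:
//
//     let g = AttrIdGenerator::new();
//     let first = g.mk_attr_id();  // AttrId(0)
//     let second = g.mk_attr_id(); // AttrId(1), via the relaxed `fetch_add` above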

impl Attribute {
    pub fn get_normal_item(&self) -> &AttrItem {
        match &self.kind {
            AttrKind::Normal(normal) => &normal.item,
            AttrKind::DocComment(..) => panic!("unexpected doc comment"),
        }
    }

    pub fn unwrap_normal_item(self) -> AttrItem {
        match self.kind {
            AttrKind::Normal(normal) => normal.into_inner().item,
            AttrKind::DocComment(..) => panic!("unexpected doc comment"),
        }
    }

    /// Returns `true` if it is a sugared doc comment (`///` or `//!` for example).
    /// So `#[doc = "doc"]` (which is a doc comment) and `#[doc(...)]` (which is not
    /// a doc comment) will return `false`.
    pub fn is_doc_comment(&self) -> bool {
        match self.kind {
            AttrKind::Normal(..) => false,
            AttrKind::DocComment(..) => true,
        }
    }

    /// For a single-segment attribute, returns its name; otherwise, returns `None`.
    pub fn ident(&self) -> Option<Ident> {
        match &self.kind {
            AttrKind::Normal(normal) => {
                if let [ident] = &*normal.item.path.segments {
                    Some(ident.ident)
                } else {
                    None
                }
            }
            AttrKind::DocComment(..) => None,
        }
    }

    pub fn name_or_empty(&self) -> Symbol {
        self.ident().unwrap_or_else(Ident::empty).name
    }

    #[inline]
    pub fn has_name(&self, name: Symbol) -> bool {
        match &self.kind {
            AttrKind::Normal(normal) => normal.item.path == name,
            AttrKind::DocComment(..) => false,
        }
    }

    pub fn path_matches(&self, name: &[Symbol]) -> bool {
        match &self.kind {
            AttrKind::Normal(normal) => {
                normal.item.path.segments.len() == name.len()
                    && normal
                        .item
                        .path
                        .segments
                        .iter()
                        .zip(name)
                        .all(|(s, n)| s.args.is_none() && s.ident.name == *n)
            }
            AttrKind::DocComment(..) => false,
        }
    }

    pub fn is_word(&self) -> bool {
        if let AttrKind::Normal(normal) = &self.kind {
            matches!(normal.item.args, AttrArgs::Empty)
        } else {
            false
        }
    }

    pub fn meta_item_list(&self) -> Option<ThinVec<NestedMetaItem>> {
        match &self.kind {
            AttrKind::Normal(normal) => normal.item.meta_item_list(),
            AttrKind::DocComment(..) => None,
        }
    }

    pub fn value_str(&self) -> Option<Symbol> {
        match &self.kind {
            AttrKind::Normal(normal) => normal.item.value_str(),
            AttrKind::DocComment(..) => None,
        }
    }

    /// Returns the documentation and its kind if this is a doc comment or a sugared doc comment.
    /// * `///doc` returns `Some(("doc", CommentKind::Line))`.
    /// * `/** doc */` returns `Some(("doc", CommentKind::Block))`.
    /// * `#[doc = "doc"]` returns `Some(("doc", CommentKind::Line))`.
    /// * `#[doc(...)]` returns `None`.
    pub fn doc_str_and_comment_kind(&self) -> Option<(Symbol, CommentKind)> {
        match &self.kind {
            AttrKind::DocComment(kind, data) => Some((*data, *kind)),
            AttrKind::Normal(normal) if normal.item.path == sym::doc => {
                normal.item.value_str().map(|s| (s, CommentKind::Line))
            }
            _ => None,
        }
    }

    /// Returns the documentation if this is a doc comment or a sugared doc comment.
    /// * `///doc` returns `Some("doc")`.
    /// * `#[doc = "doc"]` returns `Some("doc")`.
    /// * `#[doc(...)]` returns `None`.
    pub fn doc_str(&self) -> Option<Symbol> {
        match &self.kind {
            AttrKind::DocComment(.., data) => Some(*data),
            AttrKind::Normal(normal) if normal.item.path == sym::doc => normal.item.value_str(),
            _ => None,
        }
    }

    pub fn may_have_doc_links(&self) -> bool {
        self.doc_str().is_some_and(|s| comments::may_have_doc_links(s.as_str()))
    }

    pub fn is_proc_macro_attr(&self) -> bool {
        [sym::proc_macro, sym::proc_macro_attribute, sym::proc_macro_derive]
            .iter()
            .any(|kind| self.has_name(*kind))
    }

    /// Extracts the MetaItem from inside this Attribute.
    pub fn meta(&self) -> Option<MetaItem> {
        match &self.kind {
            AttrKind::Normal(normal) => normal.item.meta(self.span),
            AttrKind::DocComment(..) => None,
        }
    }

    pub fn meta_kind(&self) -> Option<MetaItemKind> {
        match &self.kind {
            AttrKind::Normal(normal) => normal.item.meta_kind(),
            AttrKind::DocComment(..) => None,
        }
    }

    pub fn tokens(&self) -> TokenStream {
        match &self.kind {
            AttrKind::Normal(normal) => normal
                .tokens
                .as_ref()
                .unwrap_or_else(|| panic!("attribute is missing tokens: {self:?}"))
                .to_attr_token_stream()
                .to_tokenstream(),
            &AttrKind::DocComment(comment_kind, data) => TokenStream::new(vec![TokenTree::Token(
                Token::new(token::DocComment(comment_kind, self.style, data), self.span),
                Spacing::Alone,
            )]),
        }
    }
}
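
// A minimal illustrative sketch (added commentary, not part of the original module):
// how the accessors above behave on two hypothetical parsed attributes.
//
// For `#[doc = "hello"]` (an `AttrKind::Normal` attribute, not a sugared comment):
//     attr.is_doc_comment()   == false
//     attr.has_name(sym::doc) == true
//     attr.value_str()        == Some("hello")
//     attr.doc_str()          == Some("hello")
//
// For `#[derive(Clone, Debug)]`:
//     attr.is_word()          == false   // it has parenthesized args
//     attr.meta_item_list()   // Some list of two `NestedMetaItem`s: `Clone`, `Debug`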

impl AttrItem {
    pub fn span(&self) -> Span {
        self.args.span().map_or(self.path.span, |args_span| self.path.span.to(args_span))
    }

    fn meta_item_list(&self) -> Option<ThinVec<NestedMetaItem>> {
        match &self.args {
            AttrArgs::Delimited(args) if args.delim == Delimiter::Parenthesis => {
                MetaItemKind::list_from_tokens(args.tokens.clone())
            }
            AttrArgs::Delimited(_) | AttrArgs::Eq(..) | AttrArgs::Empty => None,
        }
    }

    fn value_str(&self) -> Option<Symbol> {
        match &self.args {
            AttrArgs::Eq(_, args) => args.value_str(),
            AttrArgs::Delimited(_) | AttrArgs::Empty => None,
        }
    }

    pub fn meta(&self, span: Span) -> Option<MetaItem> {
        Some(MetaItem { path: self.path.clone(), kind: self.meta_kind()?, span })
    }

    pub fn meta_kind(&self) -> Option<MetaItemKind> {
        MetaItemKind::from_attr_args(&self.args)
    }
}

impl AttrArgsEq {
    fn value_str(&self) -> Option<Symbol> {
        match self {
            AttrArgsEq::Ast(expr) => match expr.kind {
                ExprKind::Lit(token_lit) => {
                    LitKind::from_token_lit(token_lit).ok().and_then(|lit| lit.str())
                }
                _ => None,
            },
            AttrArgsEq::Hir(lit) => lit.kind.str(),
        }
    }
}

impl MetaItem {
    /// For a single-segment meta item, returns its name; otherwise, returns `None`.
    pub fn ident(&self) -> Option<Ident> {
        if self.path.segments.len() == 1 { Some(self.path.segments[0].ident) } else { None }
    }

    pub fn name_or_empty(&self) -> Symbol {
        self.ident().unwrap_or_else(Ident::empty).name
    }

    pub fn has_name(&self, name: Symbol) -> bool {
        self.path == name
    }

    pub fn is_word(&self) -> bool {
        matches!(self.kind, MetaItemKind::Word)
    }

    pub fn meta_item_list(&self) -> Option<&[NestedMetaItem]> {
        match &self.kind {
            MetaItemKind::List(l) => Some(&**l),
            _ => None,
        }
    }

    /// ```text
    /// Example:
    ///     #[attribute(name = "value")]
    ///                 ^^^^^^^^^^^^^^
    /// ```
    pub fn name_value_literal(&self) -> Option<&MetaItemLit> {
        match &self.kind {
            MetaItemKind::NameValue(v) => Some(v),
            _ => None,
        }
    }

    /// This is used in case you want the value span instead of the whole attribute. Example:
    ///
    /// ```text
    /// #[doc(alias = "foo")]
    /// ```
    ///
    /// In here, it'll return a span for `"foo"`.
    pub fn name_value_literal_span(&self) -> Option<Span> {
        Some(self.name_value_literal()?.span)
    }

    pub fn value_str(&self) -> Option<Symbol> {
        self.kind.value_str()
    }

    fn from_tokens<'a, I>(tokens: &mut iter::Peekable<I>) -> Option<MetaItem>
    where
        I: Iterator<Item = &'a TokenTree>,
    {
        // FIXME: Share code with `parse_path`.
        let path = match tokens.next().map(|tt| TokenTree::uninterpolate(tt)).as_deref() {
            Some(&TokenTree::Token(
                Token { kind: ref kind @ (token::Ident(..) | token::ModSep), span },
                _,
            )) => 'arm: {
                let mut segments = if let &token::Ident(name, _) = kind {
                    if let Some(TokenTree::Token(Token { kind: token::ModSep, .. }, _)) =
                        tokens.peek()
                    {
                        tokens.next();
                        thin_vec![PathSegment::from_ident(Ident::new(name, span))]
                    } else {
                        break 'arm Path::from_ident(Ident::new(name, span));
                    }
                } else {
                    thin_vec![PathSegment::path_root(span)]
                };
                loop {
                    if let Some(&TokenTree::Token(Token { kind: token::Ident(name, _), span }, _)) =
                        tokens.next().map(|tt| TokenTree::uninterpolate(tt)).as_deref()
                    {
                        segments.push(PathSegment::from_ident(Ident::new(name, span)));
                    } else {
                        return None;
                    }
                    if let Some(TokenTree::Token(Token { kind: token::ModSep, .. }, _)) =
                        tokens.peek()
                    {
                        tokens.next();
                    } else {
                        break;
                    }
                }
                let span = span.with_hi(segments.last().unwrap().ident.span.hi());
                Path { span, segments, tokens: None }
            }
            Some(TokenTree::Token(Token { kind: token::Interpolated(nt), .. }, _)) => match &**nt {
                token::Nonterminal::NtMeta(item) => return item.meta(item.path.span),
                token::Nonterminal::NtPath(path) => (**path).clone(),
                _ => return None,
            },
            _ => return None,
        };
        let list_closing_paren_pos = tokens.peek().map(|tt| tt.span().hi());
        let kind = MetaItemKind::from_tokens(tokens)?;
        let hi = match &kind {
            MetaItemKind::NameValue(lit) => lit.span.hi(),
            MetaItemKind::List(..) => list_closing_paren_pos.unwrap_or(path.span.hi()),
            _ => path.span.hi(),
        };
        let span = path.span.with_hi(hi);
        Some(MetaItem { path, kind, span })
    }
}
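
// Added note on the shape `MetaItem::from_tokens` accepts (illustrative, not from the
// sources): for a meta item written like `rustfmt::skip(arg)`, the token trees consumed
// are roughly
//     Ident("rustfmt")  ModSep  Ident("skip")  Delimited(Parenthesis, ...)
// The leading identifier/`::` run becomes the `Path`; whatever follows is handed to
// `MetaItemKind::from_tokens`, which decides between `Word`, `List`, and `NameValue`.
// Interpolated `NtMeta`/`NtPath` nonterminals are also accepted in place of the path.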

impl MetaItemKind {
    pub fn value_str(&self) -> Option<Symbol> {
        match self {
            MetaItemKind::NameValue(v) => v.kind.str(),
            _ => None,
        }
    }

    fn list_from_tokens(tokens: TokenStream) -> Option<ThinVec<NestedMetaItem>> {
        let mut tokens = tokens.trees().peekable();
        let mut result = ThinVec::new();
        while tokens.peek().is_some() {
            let item = NestedMetaItem::from_tokens(&mut tokens)?;
            result.push(item);
            match tokens.next() {
                None | Some(TokenTree::Token(Token { kind: token::Comma, .. }, _)) => {}
                _ => return None,
            }
        }
        Some(result)
    }

    fn name_value_from_tokens<'a>(
        tokens: &mut impl Iterator<Item = &'a TokenTree>,
    ) -> Option<MetaItemKind> {
        match tokens.next() {
            Some(TokenTree::Delimited(_, Delimiter::Invisible, inner_tokens)) => {
                MetaItemKind::name_value_from_tokens(&mut inner_tokens.trees())
            }
            Some(TokenTree::Token(token, _)) => {
                MetaItemLit::from_token(&token).map(MetaItemKind::NameValue)
            }
            _ => None,
        }
    }

    fn from_tokens<'a>(
        tokens: &mut iter::Peekable<impl Iterator<Item = &'a TokenTree>>,
    ) -> Option<MetaItemKind> {
        match tokens.peek() {
            Some(TokenTree::Delimited(_, Delimiter::Parenthesis, inner_tokens)) => {
                let inner_tokens = inner_tokens.clone();
                tokens.next();
                MetaItemKind::list_from_tokens(inner_tokens).map(MetaItemKind::List)
            }
            Some(TokenTree::Delimited(..)) => None,
            Some(TokenTree::Token(Token { kind: token::Eq, .. }, _)) => {
                tokens.next();
                MetaItemKind::name_value_from_tokens(tokens)
            }
            _ => Some(MetaItemKind::Word),
        }
    }

    fn from_attr_args(args: &AttrArgs) -> Option<MetaItemKind> {
        match args {
            AttrArgs::Empty => Some(MetaItemKind::Word),
            AttrArgs::Delimited(DelimArgs { dspan: _, delim: Delimiter::Parenthesis, tokens }) => {
                MetaItemKind::list_from_tokens(tokens.clone()).map(MetaItemKind::List)
            }
            AttrArgs::Delimited(..) => None,
            AttrArgs::Eq(_, AttrArgsEq::Ast(expr)) => match expr.kind {
                ExprKind::Lit(token_lit) => {
                    // Turn failures to `None`, we'll get parse errors elsewhere.
                    MetaItemLit::from_token_lit(token_lit, expr.span)
                        .ok()
                        .map(|lit| MetaItemKind::NameValue(lit))
                }
                _ => None,
            },
            AttrArgs::Eq(_, AttrArgsEq::Hir(lit)) => Some(MetaItemKind::NameValue(lit.clone())),
        }
    }
}

impl NestedMetaItem {
    pub fn span(&self) -> Span {
        match self {
            NestedMetaItem::MetaItem(item) => item.span,
            NestedMetaItem::Lit(lit) => lit.span,
        }
    }

    /// For a single-segment meta item, returns its name; otherwise, returns `None`.
    pub fn ident(&self) -> Option<Ident> {
        self.meta_item().and_then(|meta_item| meta_item.ident())
    }

    pub fn name_or_empty(&self) -> Symbol {
        self.ident().unwrap_or_else(Ident::empty).name
    }

    /// Returns `true` if this list item is a `MetaItem` with a name of `name`.
    pub fn has_name(&self, name: Symbol) -> bool {
        self.meta_item().is_some_and(|meta_item| meta_item.has_name(name))
    }

    /// Returns `true` if `self` is a `MetaItem` and the meta item is a word.
    pub fn is_word(&self) -> bool {
        self.meta_item().is_some_and(|meta_item| meta_item.is_word())
    }

    /// Gets a list of inner meta items from a list `MetaItem` type.
    pub fn meta_item_list(&self) -> Option<&[NestedMetaItem]> {
        self.meta_item().and_then(|meta_item| meta_item.meta_item_list())
    }

    /// Returns a name and single literal value tuple of the `MetaItem`.
    pub fn name_value_literal(&self) -> Option<(Symbol, &MetaItemLit)> {
        self.meta_item().and_then(|meta_item| {
            meta_item.meta_item_list().and_then(|meta_item_list| {
                if meta_item_list.len() == 1
                    && let Some(ident) = meta_item.ident()
                    && let Some(lit) = meta_item_list[0].lit()
                {
                    return Some((ident.name, lit));
                }
                None
            })
        })
    }

    /// See [`MetaItem::name_value_literal_span`].
    pub fn name_value_literal_span(&self) -> Option<Span> {
        self.meta_item()?.name_value_literal_span()
    }

    /// Gets the string value if `self` is a `MetaItem` and the `MetaItem` is a
    /// `MetaItemKind::NameValue` variant containing a string, otherwise `None`.
    pub fn value_str(&self) -> Option<Symbol> {
        self.meta_item().and_then(|meta_item| meta_item.value_str())
    }

    /// Returns the `MetaItemLit` if `self` is a `NestedMetaItem::Lit`.
    pub fn lit(&self) -> Option<&MetaItemLit> {
        match self {
            NestedMetaItem::Lit(lit) => Some(lit),
            _ => None,
        }
    }

    /// Returns the `MetaItem` if `self` is a `NestedMetaItem::MetaItem`.
    pub fn meta_item(&self) -> Option<&MetaItem> {
        match self {
            NestedMetaItem::MetaItem(item) => Some(item),
            _ => None,
        }
    }

    /// Returns `true` if the variant is `MetaItem`.
    pub fn is_meta_item(&self) -> bool {
        self.meta_item().is_some()
    }

    fn from_tokens<'a, I>(tokens: &mut iter::Peekable<I>) -> Option<NestedMetaItem>
    where
        I: Iterator<Item = &'a TokenTree>,
    {
        match tokens.peek() {
            Some(TokenTree::Token(token, _))
                if let Some(lit) = MetaItemLit::from_token(token) =>
            {
                tokens.next();
                return Some(NestedMetaItem::Lit(lit));
            }
            Some(TokenTree::Delimited(_, Delimiter::Invisible, inner_tokens)) => {
                tokens.next();
                return NestedMetaItem::from_tokens(&mut inner_tokens.trees().peekable());
            }
            _ => {}
        }
        MetaItem::from_tokens(tokens).map(NestedMetaItem::MetaItem)
    }
}
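
// Illustrative decomposition (commentary added here, using a made-up attribute name):
// for `#[my_attr(word, key = "value")]`, `Attribute::meta_item_list()` yields two
// `NestedMetaItem::MetaItem`s:
//   - one with path `word` and `MetaItemKind::Word`, so `.is_word()` is true;
//   - one with path `key` and `MetaItemKind::NameValue`, so `.has_name(key)` holds and
//     `.value_str()` is `Some("value")`.
// A bare literal in the list, as in `#[my_attr("just a string")]`, would instead show
// up as a `NestedMetaItem::Lit`.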

pub fn mk_doc_comment(
    g: &AttrIdGenerator,
    comment_kind: CommentKind,
    style: AttrStyle,
    data: Symbol,
    span: Span,
) -> Attribute {
    Attribute { kind: AttrKind::DocComment(comment_kind, data), id: g.mk_attr_id(), style, span }
}

pub fn mk_attr(
    g: &AttrIdGenerator,
    style: AttrStyle,
    path: Path,
    args: AttrArgs,
    span: Span,
) -> Attribute {
    mk_attr_from_item(g, AttrItem { path, args, tokens: None }, None, style, span)
}

pub fn mk_attr_from_item(
    g: &AttrIdGenerator,
    item: AttrItem,
    tokens: Option<LazyAttrTokenStream>,
    style: AttrStyle,
    span: Span,
) -> Attribute {
    Attribute {
        kind: AttrKind::Normal(P(NormalAttr { item, tokens })),
        id: g.mk_attr_id(),
        style,
        span,
    }
}

pub fn mk_attr_word(g: &AttrIdGenerator, style: AttrStyle, name: Symbol, span: Span) -> Attribute {
    let path = Path::from_ident(Ident::new(name, span));
    let args = AttrArgs::Empty;
    mk_attr(g, style, path, args, span)
}

pub fn mk_attr_nested_word(
    g: &AttrIdGenerator,
    style: AttrStyle,
    outer: Symbol,
    inner: Symbol,
    span: Span,
) -> Attribute {
    let inner_tokens = TokenStream::new(vec![TokenTree::Token(
        Token::from_ast_ident(Ident::new(inner, span)),
        Spacing::Alone,
    )]);
    let outer_ident = Ident::new(outer, span);
    let path = Path::from_ident(outer_ident);
    let attr_args = AttrArgs::Delimited(DelimArgs {
        dspan: DelimSpan::from_single(span),
        delim: Delimiter::Parenthesis,
        tokens: inner_tokens,
    });
    mk_attr(g, style, path, attr_args, span)
}

pub fn mk_attr_name_value_str(
    g: &AttrIdGenerator,
    style: AttrStyle,
    name: Symbol,
    val: Symbol,
    span: Span,
) -> Attribute {
    let lit = token::Lit::new(token::Str, escape_string_symbol(val), None);
    let expr = P(Expr {
        id: DUMMY_NODE_ID,
        kind: ExprKind::Lit(lit),
        span,
        attrs: AttrVec::new(),
        tokens: None,
    });
    let path = Path::from_ident(Ident::new(name, span));
    let args = AttrArgs::Eq(span, AttrArgsEq::Ast(expr));
    mk_attr(g, style, path, args, span)
}
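
// Rough correspondence between the constructors above and surface syntax (an
// illustrative sketch; `g`, `sp`, and the symbols are assumed to be supplied by the
// caller, e.g. from `rustc_span::symbol::sym`):
//
//     mk_attr_word(g, AttrStyle::Outer, sym::test, sp)                      // #[test]
//     mk_attr_nested_word(g, AttrStyle::Outer, sym::allow, sym::unused, sp) // #[allow(unused)]
//     mk_attr_name_value_str(g, AttrStyle::Outer, sym::doc, some_val, sp)   // #[doc = "..."]
//
// All three bottom out in `mk_attr`, which wraps the path and args in an
// `AttrKind::Normal` and stamps a fresh `AttrId` from the generator.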

pub fn filter_by_name(attrs: &[Attribute], name: Symbol) -> impl Iterator<Item = &Attribute> {
    attrs.iter().filter(move |attr| attr.has_name(name))
}

pub fn find_by_name(attrs: &[Attribute], name: Symbol) -> Option<&Attribute> {
    filter_by_name(attrs, name).next()
}

pub fn first_attr_value_str_by_name(attrs: &[Attribute], name: Symbol) -> Option<Symbol> {
    find_by_name(attrs, name).and_then(|attr| attr.value_str())
}

pub fn contains_name(attrs: &[Attribute], name: Symbol) -> bool {
    find_by_name(attrs, name).is_some()
}

pub fn list_contains_name(items: &[NestedMetaItem], name: Symbol) -> bool {
    items.iter().any(|item| item.has_name(name))
}
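
// Minimal caller-side sketch (added commentary, assuming `attrs: &[Attribute]` was
// collected from some item):
//
//     contains_name(attrs, sym::inline)                    // -> bool
//     find_by_name(attrs, sym::repr)                       // -> Option<&Attribute>
//     first_attr_value_str_by_name(attrs, sym::crate_name) // -> Option<Symbol>
//
// Each helper matches on the attribute's single-segment name via `Attribute::has_name`.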